Title: Elementary Linear Algebra
1Elementary Linear Algebra
2Contents
- Real Vector Spaces
- Subspaces
- Linear Independence
- Basis and Dimension
- Row Space, Column Space, and Nullspace
- Rank and Nullity
3Definition (Vector Space)
- Let V be an arbitrary nonempty set of objects on which two operations are defined:
- Addition
- Multiplication by scalars
- If the following axioms are satisfied by all objects u, v, w in V and all scalars k and l, then we call V a vector space and we call the objects in V vectors.
4Definition (Vector Space)
- If u and v are objects in V, then u + v is in V.
- u + v = v + u
- u + (v + w) = (u + v) + w
- There is an object 0 in V, called a zero vector for V, such that 0 + u = u + 0 = u for all u in V.
- For each u in V, there is an object -u in V, called a negative of u, such that u + (-u) = (-u) + u = 0.
- If k is any scalar and u is any object in V, then ku is in V.
5Definition (Vector Space)
- k(u + v) = ku + kv
- (k + l)u = ku + lu
- k(lu) = (kl)u
- 1u = u
6Remarks
- Depending on the application, scalars may be real numbers or complex numbers.
- Vector spaces in which the scalars are complex numbers are called complex vector spaces, and those in which the scalars must be real are called real vector spaces.
7Remarks
- The definition of a vector space specifies neither the nature of the vectors nor the operations.
- Any kind of object can be a vector, and the operations of addition and scalar multiplication may not have any relationship or similarity to the standard vector operations on Rn.
- The only requirement is that the ten vector space axioms be satisfied.
8Example (Rn Is a Vector Space)
- The set V = Rn with the standard operations of addition and scalar multiplication is a vector space.
- Axioms 1 and 6 follow from the definitions of the standard operations on Rn; the remaining axioms follow from Theorem 4.1.1.
- The three most important special cases of Rn are R (the real numbers), R2 (the vectors in the plane), and R3 (the vectors in 3-space).
9Example (2×2 Matrices)
- Show that the set V of all 2×2 matrices with real entries is a vector space if vector addition is defined to be matrix addition and vector scalar multiplication is defined to be matrix scalar multiplication.
10Example (2×2 Matrices)
- Let u = [u11 u12; u21 u22] and v = [v11 v12; v21 v22] be arbitrary objects in V.
- To prove Axiom 1, we must show that u + v is an object in V; that is, we must show that u + v is a 2×2 matrix.
11Example
- Similarly, Axiom 6 holds because for any real number k we have ku = [ku11 ku12; ku21 ku22], so that ku is a 2×2 matrix and consequently is an object in V.
- Axiom 2 follows from Theorem 1.4.1(a), since matrix addition is commutative: u + v = v + u.
12Example
- Similarly, Axiom 3 follows from part (b) of that theorem, and Axioms 7, 8, and 9 follow from parts (h), (j), and (l), respectively.
13Example
- To prove Axiom 4, let 0 be the 2×2 zero matrix. Then 0 + u = u. Similarly, u + 0 = u.
14Example
- To prove Axiom 5, let -u = [-u11 -u12; -u21 -u22]. Then u + (-u) = 0. Similarly, (-u) + u = 0.
- For Axiom 10, 1u = u.
15Example (Vector Space of m×n Matrices)
- The previous example is a special case of a more general class of vector spaces.
- The arguments in that example can be adapted to show that the set V of all m×n matrices with real entries, together with the operations of matrix addition and scalar multiplication, is a vector space.
16Example (Vector Space of m×n Matrices)
- The m×n zero matrix is the zero vector 0, and if u is the m×n matrix U, then the matrix -U is the negative -u of the vector u.
- We shall denote this vector space by the symbol Mmn.
17Example (Vector Space of Real-Valued Functions)
- Let V be the set of real-valued functions defined on the entire real line (-∞, ∞). If f = f(x) and g = g(x) are two such functions and k is any real number, define the sum function f + g and the scalar multiple kf, respectively, by (f + g)(x) = f(x) + g(x) and (kf)(x) = k f(x).
18Example (Vector Space of Real-Valued Functions)
- In other words, the value of the function f + g at x is obtained by adding together the values of f and g at x (Figure 5.1.1a). Similarly, the value of kf at x is k times the value of f at x (Figure 5.1.1b). This vector space is denoted by F(-∞, ∞). If f and g are vectors in this space, then to say that f = g is equivalent to saying that f(x) = g(x) for all x in the interval (-∞, ∞).
19Example (Vector Space of Real-Valued Functions)
- The vector 0 in F(-∞, ∞) is the constant function that is identically zero for all values of x. The negative of a vector f is the function (-f)(x) = -f(x). Geometrically, the graph of -f is the reflection of the graph of f across the x-axis (Figure 5.1.1c).
20Example (Not a Vector Space)
- Let V = R2 and define addition and scalar multiplication operations as follows: If u = (u1, u2) and v = (v1, v2), then define
- u + v = (u1 + v1, u2 + v2)
- and if k is any real number, then define
- ku = (ku1, 0)
21Example (Not a Vector Space)
- There are values of u for which Axiom 10 fails to hold. For example, if u = (u1, u2) is such that u2 ≠ 0, then
- 1u = 1(u1, u2) = (1 · u1, 0) = (u1, 0) ≠ u
- Thus, V is not a vector space with the stated operations. (A short numerical check follows below.)
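To make the failure concrete, here is a minimal sketch (not part of the original slides) that checks Axiom 10 numerically for the operations defined above.

```python
# Sketch: Axiom 10 fails for the nonstandard scalar multiplication k*u = (k*u1, 0).
def add(u, v):
    # the standard addition defined in the example
    return (u[0] + v[0], u[1] + v[1])

def scalar_mul(k, u):
    # the nonstandard scalar multiplication defined in the example
    return (k * u[0], 0)

u = (3, 4)                       # any u with u2 != 0
print(scalar_mul(1, u))          # (3, 0)
print(scalar_mul(1, u) == u)     # False, so Axiom 10 (1u = u) fails
```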
22Every Plane Through the Origin Is a Vector Space
- Check all the axioms!
- Let V be any plane through the origin in R3. Since R3 itself is a vector space, Axioms 2, 3, 7, 8, 9, and 10 hold for all points in R3 and consequently for all points in the plane V.
- We need only show that Axioms 1, 4, 5, and 6 are satisfied.
23Every Plane Through the Origin Is a Vector Space
- Check all the axioms!
- Since the plane V passes through the origin, it has an equation of the form ax + by + cz = 0. If u = (u1, u2, u3) and v = (v1, v2, v3) are points in V, then au1 + bu2 + cu3 = 0 and av1 + bv2 + cv3 = 0. Adding these equations gives a(u1 + v1) + b(u2 + v2) + c(u3 + v3) = 0.
- Axiom 1: u + v = (u1 + v1, u2 + v2, u3 + v3), so u + v lies in the plane V.
- Axiom 5: Multiplying au1 + bu2 + cu3 = 0 through by -1 gives a(-u1) + b(-u2) + c(-u3) = 0; thus -u = (-u1, -u2, -u3) lies in V.
24The Zero Vector Space
- Let V consist of a single object, which we denote by 0, and define 0 + 0 = 0 and k0 = 0 for all scalars k.
- We call this the zero vector space.
25Theorem 5.1.1
- Let V be a vector space, u a vector in V, and k a scalar; then
- 0u = 0
- k0 = 0
- (-1)u = -u
- If ku = 0, then k = 0 or u = 0.
26Subspaces
- Definition
- A subset W of a vector space V is called a subspace of V if W is itself a vector space under the addition and scalar multiplication defined on V.
- Theorem 5.2.1
- If W is a set of one or more vectors from a vector space V, then W is a subspace of V if and only if the following conditions hold:
- If u and v are vectors in W, then u + v is in W.
- If k is any scalar and u is any vector in W, then ku is in W.
27Subspaces
- Remark
- Theorem 5.2.1 states that W is a subspace of V if and only if W is closed under addition (condition (a)) and closed under scalar multiplication (condition (b)).
28Example
- Let W be any plane through the origin and let u and v be any vectors in W.
- u + v must lie in W since it is the diagonal of the parallelogram determined by u and v, and ku must lie in W for any scalar k since ku lies on the line through u and the origin.
29Example
- Thus, W is closed under addition and scalar
multiplication, so it is a subspace of R3.
30Example
- A line through the origin of R3 is a subspace of R3.
- Let W be a line through the origin of R3.
31Example (Not a Subspace)
- Let W be the set of all points (x, y) in R2 such that x ≥ 0 and y ≥ 0. These are the points in the first quadrant.
32Example (Not a Subspace)
- The set W is not a subspace of R2 since it is not closed under scalar multiplication.
- For example, v = (1, 1) lies in W, but its negative (-1)v = -v = (-1, -1) does not.
33Remarks
Think about the set {0} and the empty set!
- Every nonzero vector space V has at least two subspaces: V itself is a subspace, and the set {0} consisting of just the zero vector in V is a subspace called the zero subspace.
34Remarks
Think about the set {0} and the empty set!
- Examples of subspaces of R2 and R3
- Subspaces of R2
- {0}
- Lines through the origin
- R2
- Subspaces of R3
- {0}
- Lines through the origin
- Planes through the origin
- R3
- These are actually the only subspaces of R2 and R3.
35Subspaces of Mnn
- The sum of two symmetric matrices is symmetric, and a scalar multiple of a symmetric matrix is symmetric. Thus, the set of n×n symmetric matrices is a subspace of the vector space Mnn of n×n matrices.
36Subspaces of Mnn
- Similarly, the set of n×n upper triangular matrices, the set of n×n lower triangular matrices, and the set of n×n diagonal matrices all form subspaces of Mnn, since each of these sets is closed under addition and scalar multiplication. (A quick numerical check for the symmetric case follows below.)
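The following sketch (not from the slides) checks the two closure conditions of Theorem 5.2.1 numerically for symmetric matrices in M33.

```python
# Sketch: symmetric matrices are closed under addition and scalar multiplication.
import numpy as np

def is_symmetric(M):
    return np.allclose(M, M.T)

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3)); A = A + A.T   # build a symmetric matrix
B = rng.standard_normal((3, 3)); B = B + B.T   # build another symmetric matrix

print(is_symmetric(A + B))     # True: closed under addition
print(is_symmetric(2.5 * A))   # True: closed under scalar multiplication
```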
37Solution Space
- Solution Space of Homogeneous Systems
- If Ax = b is a system of linear equations, then each vector x that satisfies this equation is called a solution vector of the system.
- Theorem 5.2.2 shows that the solution vectors of a homogeneous linear system form a vector space, which we shall call the solution space of the system.
38Solution Space
- Theorem 5.2.2
- If Ax = 0 is a homogeneous linear system of m equations in n unknowns, then the set of solution vectors is a subspace of Rn.
39Example
- Find the solution spaces of the linear systems.
- Each of these systems has three unknowns, so the solutions form subspaces of R3.
- Geometrically, each solution space must be a line through the origin, a plane through the origin, the origin only, or all of R3.
40Example
- Solution.
- (a) x = 2s - 3t, y = s, z = t
- x = 2y - 3z or x - 2y + 3z = 0
- This is the equation of the plane through the origin with n = (1, -2, 3) as a normal vector.
- (b) x = -5t, y = -t, z = t
- which are parametric equations for the line through the origin parallel to the vector v = (-5, -1, 1).
- (c) The solution is x = 0, y = 0, z = 0, so the solution space is the origin only, that is, {0}.
- (d) The solutions are x = r, y = s, z = t, where r, s, and t have arbitrary values, so the solution space is all of R3.
41Linear Combination
- Definition
- A vector w is a linear combination of the vectors v1, v2, ..., vr if it can be expressed in the form w = k1v1 + k2v2 + ... + krvr, where k1, k2, ..., kr are scalars.
42Linear Combination
- Vectors in R3 are linear combinations of i, j, and k
- Every vector v = (a, b, c) in R3 is expressible as a linear combination of the standard basis vectors
- i = (1, 0, 0), j = (0, 1, 0), k = (0, 0, 1)
- since
- v = a(1, 0, 0) + b(0, 1, 0) + c(0, 0, 1) = ai + bj + ck
43Example
- Consider the vectors u = (1, 2, -1) and v = (6, 4, 2) in R3. Show that w = (9, 2, 7) is a linear combination of u and v and that w' = (4, -1, 8) is not a linear combination of u and v.
44Example
- Solution.
- In order for w to be a linear combination of u and v, there must be scalars k1 and k2 such that w = k1u + k2v
- (9, 2, 7) = (k1 + 6k2, 2k1 + 4k2, -k1 + 2k2)
- Equating corresponding components gives
- k1 + 6k2 = 9
- 2k1 + 4k2 = 2
- -k1 + 2k2 = 7
45Example
- Solution.
- Solving this system yields k1 = -3, k2 = 2, so
- w = -3u + 2v
- Similarly, for w' to be a linear combination of u and v, there must be scalars k1 and k2 such that w' = k1u + k2v
- (4, -1, 8) = k1(1, 2, -1) + k2(6, 4, 2)
- or
- (4, -1, 8) = (k1 + 6k2, 2k1 + 4k2, -k1 + 2k2)
46Example
- Solution.
- Equating corresponding components gives
- k1 + 6k2 = 4
- 2k1 + 4k2 = -1
- -k1 + 2k2 = 8
- This system of equations is inconsistent, so no such scalars k1 and k2 exist. Consequently, w' is not a linear combination of u and v. (A numerical check of both computations follows below.)
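The same test can be run numerically; the following sketch (not from the slides) solves the two systems above with NumPy's least-squares routine and checks whether the result reproduces the target vector exactly.

```python
# Sketch: is the target a linear combination of u and v?
import numpy as np

u = np.array([1, 2, -1])
v = np.array([6, 4, 2])
A = np.column_stack([u, v])       # columns are u and v

for target in (np.array([9, 2, 7]), np.array([4, -1, 8])):
    k, *_ = np.linalg.lstsq(A, target, rcond=None)
    if np.allclose(A @ k, target):
        print(target, "is a combination with k1, k2 =", k)       # w: k = [-3, 2]
    else:
        print(target, "is not a linear combination of u and v")  # w'
```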
47Linear Combination and Spanning
- Theorem 5.2.3
- If v1, v2, ..., vr are vectors in a vector space V, then
- The set W of all linear combinations of v1, v2, ..., vr is a subspace of V.
- W is the smallest subspace of V that contains v1, v2, ..., vr, in the sense that every other subspace of V that contains v1, v2, ..., vr must contain W.
48Linear Combination and Spanning
- Definition
- If S = {v1, v2, ..., vr} is a set of vectors in a vector space V, then the subspace W of V consisting of all linear combinations of the vectors in S is called the space spanned by v1, v2, ..., vr, and we say that the vectors v1, v2, ..., vr span W.
- To indicate that W is the space spanned by the vectors in the set S = {v1, v2, ..., vr}, we write W = span(S) or W = span{v1, v2, ..., vr}.
49Example
- If v1 and v2 are non-collinear vectors in R3 with their initial points at the origin, then span{v1, v2}, which consists of all linear combinations k1v1 + k2v2, is the plane determined by v1 and v2.
50Example
- Similarly, if v is a nonzero vector in R2 or R3, then span{v}, which is the set of all scalar multiples kv, is the line determined by v.
51Example
- Determine whether v1 = (1, 1, 2), v2 = (1, 0, 1), and v3 = (2, 1, 3) span the vector space R3.
52Example
- Solution
- Is it possible that an arbitrary vector b = (b1, b2, b3) in R3 can be expressed as a linear combination b = k1v1 + k2v2 + k3v3?
- b = (b1, b2, b3) = k1(1, 1, 2) + k2(1, 0, 1) + k3(2, 1, 3) = (k1 + k2 + 2k3, k1 + k3, 2k1 + k2 + 3k3), or
- k1 + k2 + 2k3 = b1
- k1 + k3 = b2
- 2k1 + k2 + 3k3 = b3
53Example
- Solution
- This system is consistent for all values of b1, b2, and b3 if and only if the coefficient matrix A = [1 1 2; 1 0 1; 2 1 3] has a nonzero determinant.
- However, det(A) = 0, so v1, v2, and v3 do not span R3. (A computational check follows below.)
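The determinant test can be carried out numerically; this sketch (not from the slides) builds the coefficient matrix from v1, v2, v3 and evaluates its determinant and rank.

```python
# Sketch: determinant/rank test for whether v1, v2, v3 span R^3.
import numpy as np

v1, v2, v3 = (1, 1, 2), (1, 0, 1), (2, 1, 3)
A = np.column_stack([v1, v2, v3])

print(np.linalg.det(A))          # 0.0 (up to rounding): the system is not consistent for every b
print(np.linalg.matrix_rank(A))  # 2: span{v1, v2, v3} is only a plane, not all of R^3
```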
54Theorem 5.2.4
- If S = {v1, v2, ..., vr} and S' = {w1, w2, ..., wr} are two sets of vectors in a vector space V, then
- span{v1, v2, ..., vr} = span{w1, w2, ..., wr}
- if and only if each vector in S is a linear combination of those in S' and each vector in S' is a linear combination of those in S.
55Linearly Dependent and Independent
- Definition
- If S = {v1, v2, ..., vr} is a nonempty set of vectors, then the vector equation k1v1 + k2v2 + ... + krvr = 0 has at least one solution, namely k1 = 0, k2 = 0, ..., kr = 0.
- If this is the only solution, then S is called a linearly independent set. If there are other solutions, then S is called a linearly dependent set.
56Linearly Dependent and Independent
- Examples
- If v1 = (2, -1, 0, 3), v2 = (1, 2, 5, -1), and v3 = (7, -1, 5, 8),
- then the set of vectors S = {v1, v2, v3} is linearly dependent, since 3v1 + v2 - v3 = 0.
57Example
- Let i = (1, 0, 0), j = (0, 1, 0), and k = (0, 0, 1) in R3.
- Consider the equation k1i + k2j + k3k = 0
- ⇒ k1(1, 0, 0) + k2(0, 1, 0) + k3(0, 0, 1) = (0, 0, 0)
- ⇒ (k1, k2, k3) = (0, 0, 0)
- ⇒ the set S = {i, j, k} is linearly independent.
- Similarly, the vectors
- e1 = (1, 0, 0, ..., 0), e2 = (0, 1, 0, ..., 0), ..., en = (0, 0, 0, ..., 1)
- form a linearly independent set in Rn.
58Example
- Remark
- To check whether a set of vectors is linearly independent, set a linear combination of the vectors equal to the zero vector and determine whether the only solution forces all of the coefficients to equal zero.
59Example
- Determine whether the vectors
- v1 = (1, -2, 3), v2 = (5, 6, -1), v3 = (3, 2, 1)
- form a linearly dependent set or a linearly
independent set.
60Example
- Solution
- Consider the vector equation k1v1 + k2v2 + k3v3 = 0
- ⇒ k1(1, -2, 3) + k2(5, 6, -1) + k3(3, 2, 1) = (0, 0, 0)
- ⇒ k1 + 5k2 + 3k3 = 0, -2k1 + 6k2 + 2k3 = 0, 3k1 - k2 + k3 = 0
- ⇒ det(A) = 0, so the system has nontrivial solutions
- ⇒ v1, v2, and v3 form a linearly dependent set. (A numerical check follows below.)
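Here is the same determinant test in code, as a sketch that is not part of the slides.

```python
# Sketch: determinant test for linear independence of v1, v2, v3 in R^3.
import numpy as np

v1, v2, v3 = (1, -2, 3), (5, 6, -1), (3, 2, 1)
A = np.column_stack([v1, v2, v3])   # coefficient matrix of k1*v1 + k2*v2 + k3*v3 = 0

print(np.linalg.det(A))   # 0.0 (up to rounding): nontrivial solutions exist,
                          # so {v1, v2, v3} is linearly dependent
```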
61Theorems
- Theorem 5.3.1
- A set S with two or more vectors is
- Linearly dependent if and only if at least one of the vectors in S is expressible as a linear combination of the other vectors in S.
- Linearly independent if and only if no vector in S is expressible as a linear combination of the other vectors in S.
62Theorems
- Theorem 5.3.2
- A finite set of vectors that contains the zero vector is linearly dependent.
- A set with exactly two vectors is linearly independent if and only if neither vector is a scalar multiple of the other.
63Theorems
- Theorem 5.3.3
- Let S = {v1, v2, ..., vr} be a set of vectors in Rn. If r > n, then S is linearly dependent.
64Examples
- In Example 1 we saw that the vectors
- v1 = (2, -1, 0, 3), v2 = (1, 2, 5, -1), and v3 = (7, -1, 5, 8)
- form a linearly dependent set. In this example each vector is expressible as a linear combination of the other two, since it follows from the equation 3v1 + v2 - v3 = 0 that
- v1 = -(1/3)v2 + (1/3)v3, v2 = -3v1 + v3, and v3 = 3v1 + v2
65Examples
- Consider the vectors i = (1, 0, 0), j = (0, 1, 0), and k = (0, 0, 1) in R3.
- Suppose that k is expressible as
- k = k1i + k2j
- Then, in terms of components,
- (0, 0, 1) = k1(1, 0, 0) + k2(0, 1, 0), or (0, 0, 1) = (k1, k2, 0)
- But the last equation is not satisfied by any values of k1 and k2, so k cannot be expressed as a linear combination of i and j. Similarly, i is not expressible as a linear combination of j and k, and j is not expressible as a linear combination of i and k.
66Geometric Interpretation of Linear Independence
- In R2 and R3, a set of two vectors is linearly independent if and only if the vectors do not lie on the same line when they are placed with their initial points at the origin.
- In R3, a set of three vectors is linearly independent if and only if the vectors do not lie in the same plane when they are placed with their initial points at the origin.
67Nonrectangular Coordinate Systems
- The coordinate system establishes a one-to-one correspondence between points in the plane and ordered pairs of real numbers.
- Although perpendicular coordinate axes are the most common, any two nonparallel lines can be used to define a coordinate system in the plane.
68Nonrectangular Coordinate Systems
- A coordinate system can be constructed from general vectors:
- v1 and v2 are vectors of length 1 that point in the positive directions of the axes.
- Similarly, the coordinates (a, b, c) of a point P in 3-space can be obtained by expressing the vector from the origin to P as a linear combination of the three basis vectors.
69Nonrectangular Coordinate Systems
- Informally stated, vectors that specify a coordinate system are called basis vectors for that system.
- Although we used basis vectors of length 1 in the preceding discussion, this is not essential; nonzero vectors of any length will suffice.
70Basis
- Definition
- If V is any vector space and S = {v1, v2, ..., vn} is a set of vectors in V, then S is called a basis for V if the following two conditions hold:
- S is linearly independent.
- S spans V.
71Basis
- Theorem 5.4.1 (Uniqueness of Basis Representation)
- If S = {v1, v2, ..., vn} is a basis for a vector space V, then every vector v in V can be expressed in the form
- v = c1v1 + c2v2 + ... + cnvn
- in exactly one way.
72Coordinates Relative to a Basis
- If S = {v1, v2, ..., vn} is a basis for a vector space V, and
- v = c1v1 + c2v2 + ... + cnvn
- is the expression for a vector v in terms of the basis S, then the scalars c1, c2, ..., cn are called the coordinates of v relative to the basis S.
- The vector (c1, c2, ..., cn) in Rn constructed from these coordinates is called the coordinate vector of v relative to S; it is denoted by
- (v)S = (c1, c2, ..., cn)
73Coordinates Relative to a Basis
- Remark
- Coordinate vectors depend not only on the basis S but also on the order in which the basis vectors are written.
- A change in the order of the basis vectors results in a corresponding change of order for the entries in the coordinate vector.
74Example (Standard Basis for R3)
- Suppose that i = (1, 0, 0), j = (0, 1, 0), and k = (0, 0, 1); then S = {i, j, k} is a linearly independent set in R3.
- This set also spans R3 since any vector v = (a, b, c) in R3 can be written as
- v = (a, b, c) = a(1, 0, 0) + b(0, 1, 0) + c(0, 0, 1) = ai + bj + ck
75Example (Standard Basis for R3)
- Thus, S is a basis for R3; it is called the standard basis for R3.
- Looking at the coefficients of i, j, and k, it follows that the coordinates of v relative to the standard basis are a, b, and c, so
- (v)S = (a, b, c)
- Comparing this result to v = (a, b, c), we have
- v = (v)S
76Standard Basis for Rn
- If e1 = (1, 0, 0, ..., 0), e2 = (0, 1, 0, ..., 0), ..., en = (0, 0, 0, ..., 1), then
- S = {e1, e2, ..., en}
- is a linearly independent set in Rn.
- This set also spans Rn since any vector v = (v1, v2, ..., vn) in Rn can be written as
- v = v1e1 + v2e2 + ... + vnen
- Thus, S is a basis for Rn; it is called the standard basis for Rn.
77Standard Basis for Rn
- The coordinates of v = (v1, v2, ..., vn) relative to the standard basis are v1, v2, ..., vn; thus
- (v)S = (v1, v2, ..., vn)
- As in the previous example, we have v = (v)S, so a vector v and its coordinate vector relative to the standard basis for Rn are the same.
78Example
- Let v1 = (1, 2, 1), v2 = (2, 9, 0), and v3 = (3, 3, 4). Show that the set S = {v1, v2, v3} is a basis for R3.
79Example
- Solution
- To show that the set S spans R3, we must show that an arbitrary vector
- b = (b1, b2, b3)
- can be expressed as a linear combination
- b = c1v1 + c2v2 + c3v3
- of the vectors in S.
- Let (b1, b2, b3) = c1(1, 2, 1) + c2(2, 9, 0) + c3(3, 3, 4)
- ⇒ c1 + 2c2 + 3c3 = b1, 2c1 + 9c2 + 3c3 = b2, c1 + 4c3 = b3
- ⇒ det(A) ≠ 0, where A is the coefficient matrix
- ⇒ S is a basis for R3
80Example (Representing a Vector Using Two Bases)
- Let S = {v1, v2, v3} be the basis for R3 in the preceding example.
- Find the coordinate vector of v = (5, -1, 9) with respect to S.
- Find the vector v in R3 whose coordinate vector with respect to the basis S is (v)S = (-1, 3, 2).
81Example (Representing a Vector Using Two Bases)
- Solution (a)
- We must find scalars c1, c2, c3 such that v = c1v1 + c2v2 + c3v3, or, in terms of components, (5, -1, 9) = c1(1, 2, 1) + c2(2, 9, 0) + c3(3, 3, 4)
- Solving this system, we obtain c1 = 1, c2 = -1, c3 = 2.
- Therefore, (v)S = (1, -1, 2).
- Solution (b)
- Using the definition of the coordinate vector (v)S, we obtain
- v = (-1)v1 + 3v2 + 2v3 = (11, 31, 7). (A computational check of both parts follows below.)
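Both parts reduce to one linear system with the basis vectors as columns; the sketch below (not from the slides) reproduces the two computations with NumPy.

```python
# Sketch: coordinates relative to the basis S = {v1, v2, v3} and back.
import numpy as np

v1, v2, v3 = (1, 2, 1), (2, 9, 0), (3, 3, 4)
P = np.column_stack([v1, v2, v3])   # basis vectors as columns

# (a) coordinate vector of v = (5, -1, 9) relative to S
v = np.array([5, -1, 9])
print(np.linalg.solve(P, v))        # [ 1. -1.  2.]  ->  (v)_S = (1, -1, 2)

# (b) the vector whose coordinate vector relative to S is (-1, 3, 2)
print(P @ np.array([-1, 3, 2]))     # [11 31  7]  ->  v = (11, 31, 7)
```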
82Standard Basis for Pn
- S = {1, x, x2, ..., xn} is a basis for the vector space Pn of polynomials of the form a0 + a1x + ... + anxn. The set S is called the standard basis for Pn.
- Find the coordinate vector of the polynomial p = a0 + a1x + a2x2 relative to the basis S = {1, x, x2} for P2.
83Standard Basis for Pn
- Solution
- The coordinates of p = a0 + a1x + a2x2 are the scalar coefficients of the basis vectors 1, x, and x2, so
- (p)S = (a0, a1, a2).
84Standard Basis for Mmn
- Let M1 = [1 0; 0 0], M2 = [0 1; 0 0], M3 = [0 0; 1 0], and M4 = [0 0; 0 1].
- The set S = {M1, M2, M3, M4} is a basis for the vector space M22 of 2×2 matrices.
- To see that S spans M22, note that an arbitrary vector (matrix) [a b; c d] can be written as aM1 + bM2 + cM3 + dM4.
85Standard Basis for Mmn
- To see that S is linearly independent, assume aM1 + bM2 + cM3 + dM4 = 0. It follows that [a b; c d] = [0 0; 0 0].
- Thus, a = b = c = d = 0, so S is linearly independent.
- The basis S is called the standard basis for M22.
- More generally, the standard basis for Mmn consists of the mn different matrices with a single 1 and zeros for the remaining entries.
86Basis for the Subspace span(S)
- If S = {v1, v2, ..., vn} is a linearly independent set in a vector space V, then S is a basis for the subspace span(S), since S spans span(S) by the definition of span(S).
87Finite-Dimensional
- Definition
- A nonzero vector space V is called finite-dimensional if it contains a finite set of vectors {v1, v2, ..., vn} that forms a basis. If no such set exists, V is called infinite-dimensional. In addition, we shall regard the zero vector space to be finite-dimensional.
88Finite-Dimensional
- Example
- The vector spaces Rn, Pn, and Mmn are finite-dimensional.
- The vector spaces F(-∞, ∞), C(-∞, ∞), Cm(-∞, ∞), and C∞(-∞, ∞) are infinite-dimensional.
89Theorems
- Theorem 5.4.2
- Let V be a finite-dimensional vector space and {v1, v2, ..., vn} any basis.
- If a set has more than n vectors, then it is linearly dependent.
- If a set has fewer than n vectors, then it does not span V.
90Theorems
- Theorem 5.4.3
- All bases for a finite-dimensional vector space
have the same number of vectors.
91Dimension
- Definition
- The dimension of a finite-dimensional vector space V, denoted by dim(V), is defined to be the number of vectors in a basis for V.
- We define the zero vector space to have dimension zero.
92Dimension
- Dimensions of Some Vector Spaces
- dim(Rn) = n (the standard basis has n vectors)
- dim(Pn) = n + 1 (the standard basis has n + 1 vectors)
- dim(Mmn) = mn (the standard basis has mn vectors)
93Example
- Determine a basis for and the dimension of the solution space of the homogeneous system
- 2x1 + 2x2 - x3 + x5 = 0
- -x1 - x2 + 2x3 - 3x4 + x5 = 0
- x1 + x2 - 2x3 - x5 = 0
- x3 + x4 + x5 = 0
94Example
- Solution
- The general solution of the given system is
- x1 = -s - t, x2 = s,
- x3 = -t, x4 = 0, x5 = t
- Therefore, the solution vectors can be written as (x1, x2, x3, x4, x5) = s(-1, 1, 0, 0, 0) + t(-1, 0, -1, 0, 1)
95Example
- Which shows that the vectors v1 = (-1, 1, 0, 0, 0) and v2 = (-1, 0, -1, 0, 1) span the solution space.
- Since they are also linearly independent, {v1, v2} is a basis, and the solution space is two-dimensional. (A symbolic check follows below.)
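A sketch (not from the slides) that recomputes this basis symbolically from the coefficient matrix of the system above:

```python
# Sketch: basis of the solution space of the homogeneous system above.
from sympy import Matrix

A = Matrix([[ 2,  2, -1,  0,  1],
            [-1, -1,  2, -3,  1],
            [ 1,  1, -2,  0, -1],
            [ 0,  0,  1,  1,  1]])

basis = A.nullspace()        # basis vectors of the solution space of Ax = 0
for v in basis:
    print(v.T)               # (-1, 1, 0, 0, 0) and (-1, 0, -1, 0, 1)
print(len(basis))            # 2 -> the solution space is two-dimensional
```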
96Theorems
- Theorem 5.4.4 (Plus/Minus Theorem)
- Let S be a nonempty set of vectors in a vector space V.
- If S is a linearly independent set, and if v is a vector in V that is outside of span(S), then the set S ∪ {v} that results by inserting v into S is still linearly independent.
- If v is a vector in S that is expressible as a linear combination of other vectors in S, and if S - {v} denotes the set obtained by removing v from S, then S and S - {v} span the same space; that is, span(S) = span(S - {v}).
97Theorems
- Theorem 5.4.5
- If V is an n-dimensional vector space, and if S
is a set in V with exactly n vectors, then S is a
basis for V if either S spans V or S is linearly
independent.
98Example
- Show that v1 = (-3, 7) and v2 = (5, 5) form a basis for R2 by inspection.
- Solution
- Neither vector is a scalar multiple of the other ⇒ the two vectors form a linearly independent set in the two-dimensional space R2 ⇒ the two vectors form a basis by Theorem 5.4.5.
99Example
- Show that v1 = (2, 0, 1), v2 = (4, 0, 7), and v3 = (-1, 1, 4) form a basis for R3 by inspection.
- Solution
- The vectors v1 and v2 form a linearly independent set in the xz-plane.
- The vector v3 is outside of the xz-plane, so the set {v1, v2, v3} is also linearly independent.
- Since R3 is three-dimensional, Theorem 5.4.5 implies that {v1, v2, v3} is a basis for R3.
100Theorems
- Theorem 5.4.6
- Let S be a finite set of vectors in a finite-dimensional vector space V.
- If S spans V but is not a basis for V, then S can be reduced to a basis for V by removing appropriate vectors from S.
- If S is a linearly independent set that is not already a basis for V, then S can be enlarged to a basis for V by inserting appropriate vectors into S.
101Theorems
- Theorem 5.4.7
- If W is a subspace of a finite-dimensional vector space V, then dim(W) ≤ dim(V).
- If dim(W) = dim(V), then W = V.
102Definition
- For an m×n matrix A, the vectors in Rn formed from the rows of A are called the row vectors of A, and the vectors in Rm formed from the columns of A are called the column vectors of A.
103Example
- Let A = [2 1 0; 3 -1 4].
- The row vectors of A are
- r1 = [2 1 0] and r2 = [3 -1 4]
- and the column vectors of A are c1 = [2; 3], c2 = [1; -1], and c3 = [0; 4].
104Row Space and Column Space
- Definition
- If A is an m×n matrix, then the subspace of Rn spanned by the row vectors of A is called the row space of A, and the subspace of Rm spanned by the column vectors is called the column space of A.
- The solution space of the homogeneous system of equations Ax = 0, which is a subspace of Rn, is called the nullspace of A.
105Row Space and Column Space
- Theorem 5.5.1
- A system of linear equations Ax = b is consistent if and only if b is in the column space of A.
106Example
- Let Ax = b be the linear system
- Show that b is in the column space of A, and express b as a linear combination of the column vectors of A.
107Example
- Solution
- Solving the system by Gaussian elimination yields
- x1 = 2, x2 = -1, x3 = 3
- Since the system is consistent, b is in the column space of A.
- Moreover, it follows that b = 2c1 - c2 + 3c3, where c1, c2, and c3 are the column vectors of A.
108General and Particular Solutions
- Theorem 5.5.2
- If x0 denotes any single solution of a consistent linear system Ax = b, and if v1, v2, ..., vk form a basis for the nullspace of A (that is, the solution space of the homogeneous system Ax = 0), then every solution of Ax = b can be expressed in the form
- x = x0 + c1v1 + c2v2 + ... + ckvk
- Conversely, for all choices of scalars c1, c2, ..., ck, the vector x in this formula is a solution of Ax = b.
109General and Particular Solutions
- Remark
- The vector x0 is called a particular solution of Ax = b.
- The expression x0 + c1v1 + ... + ckvk is called the general solution of Ax = b; the expression c1v1 + ... + ckvk is called the general solution of Ax = 0.
- The general solution of Ax = b is the sum of any particular solution of Ax = b and the general solution of Ax = 0.
110Example (General Solution of Ax = b)
- The solution to the nonhomogeneous system
- x1 + 3x2 - 2x3 + 2x5 = 0
- 2x1 + 6x2 - 5x3 - 2x4 + 4x5 - 3x6 = -1
- 5x3 + 10x4 + 15x6 = 5
- 2x1 + 6x2 + 8x4 + 4x5 + 18x6 = 6
- is x1 = -3r - 4s - 2t, x2 = r, x3 = -2s, x4 = s, x5 = t, x6 = 1/3
111Example (General Solution of Ax = b)
- The result can be written in vector form as
- x = (0, 0, 0, 0, 0, 1/3) + r(-3, 1, 0, 0, 0, 0) + s(-4, 0, -2, 1, 0, 0) + t(-2, 0, 0, 0, 1, 0)
- which is the general solution.
- The vector x0 = (0, 0, 0, 0, 0, 1/3) is a particular solution of the nonhomogeneous system, and the remaining linear combination is the general solution of the homogeneous system Ax = 0. (A numerical check follows below.)
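As a sketch (not from the slides), the decomposition can be verified numerically: the particular solution satisfies Ax = b, and each of the vectors multiplying r, s, t lies in the nullspace of A, using the system as written above.

```python
# Sketch: verifying x0 + (general solution of Ax = 0) for the system above.
import numpy as np

A = np.array([[1, 3, -2,  0, 2,  0],
              [2, 6, -5, -2, 4, -3],
              [0, 0,  5, 10, 0, 15],
              [2, 6,  0,  8, 4, 18]], dtype=float)
b = np.array([0, -1, 5, 6], dtype=float)

x0 = np.array([0, 0, 0, 0, 0, 1/3])    # particular solution
v1 = np.array([-3, 1,  0, 0, 0, 0])    # vectors multiplying r, s, t in the
v2 = np.array([-4, 0, -2, 1, 0, 0])    # general solution
v3 = np.array([-2, 0,  0, 0, 1, 0])

print(np.allclose(A @ x0, b))                               # True
print([bool(np.allclose(A @ v, 0)) for v in (v1, v2, v3)])  # [True, True, True]
```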
112Example
- Find a basis for the nullspace of
113Example
- Solution
- The nullspace of A is the solution space of the homogeneous system
- 2x1 + 2x2 - x3 + x5 = 0
- -x1 - x2 + 2x3 - 3x4 + x5 = 0
- x1 + x2 - 2x3 - x5 = 0
- x3 + x4 + x5 = 0
- In Example 10 of Section 5.4 we showed that the vectors v1 = (-1, 1, 0, 0, 0) and v2 = (-1, 0, -1, 0, 1) form a basis for the nullspace.
114Theorems
- Theorems 5.5.3 and 5.5.4
- Elementary row operations change neither the nullspace nor the row space of a matrix.
- Theorem 5.5.5
- If A and B are row equivalent matrices, then
- A given set of column vectors of A is linearly independent if and only if the corresponding column vectors of B are linearly independent.
- A given set of column vectors of A forms a basis for the column space of A if and only if the corresponding column vectors of B form a basis for the column space of B.
115Theorems
- Theorem 5.5.6
- If a matrix R is in row echelon form, then the
row vectors with the leading 1s (i.e., the
nonzero row vectors) form a basis for the row
space of R, and the column vectors with the
leading 1s of the row vectors form a basis for
the column space of R.
116Bases for Row and Column Spaces
117Example
- Find bases for the row and column spaces of
118Example
- Solution
- Reducing A to row-echelon form, we obtain a matrix R whose nonzero row vectors are
- r1 = [1 -3 4 -2 5 4]
- r2 = [0 0 1 3 -2 -6]
- r3 = [0 0 0 0 1 5]
- By Theorem 5.5.6, these vectors form a basis for the row space, and by Theorem 5.5.5(b) the columns of A corresponding to the columns of R that contain the leading 1s form a basis for the column space of A.
Note the correspondence between the columns!
119Example (Basis for a Vector Space Using Row
Operations )
- Find a basis for the space spanned by the vectors
- v1 = (1, -2, 0, 0, 3), v2 = (2, -5, -3, -2, 6),
- v3 = (0, 5, 15, 10, 0), v4 = (2, 6, 18, 8, 6).
120Example (Basis for a Vector Space Using Row
Operations )
- Solution (Write down the vectors as the rows of a matrix first!)
- Reducing this matrix to row-echelon form, the nonzero row vectors are
- w1 = (1, -2, 0, 0, 3), w2 = (0, 1, 3, 2, 0), w3 = (0, 0, 1, 1, 0)
- These vectors form a basis for the row space and consequently form a basis for the subspace of R5 spanned by v1, v2, v3, and v4. (A symbolic version of this computation follows below.)
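A sketch (not from the slides) of the same computation with SymPy; the reduced rows differ from w1, w2, w3 above but span the same row space.

```python
# Sketch: basis for span{v1, v2, v3, v4} via row reduction.
from sympy import Matrix

A = Matrix([[1, -2,  0,  0, 3],
            [2, -5, -3, -2, 6],
            [0,  5, 15, 10, 0],
            [2,  6, 18,  8, 6]])

R, _ = A.rref()                                      # reduced row-echelon form
basis = [R.row(i) for i in range(R.rows) if any(R.row(i))]
for w in basis:
    print(w)    # three nonzero rows -> the spanned subspace of R^5 is 3-dimensional
```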
121Remarks
- Keeping in mind that A and R may have different column spaces, we cannot find a basis for the column space of A directly from the column vectors of R.
- However, it follows from Theorem 5.5.5(b) that if we can find a set of column vectors of R that forms a basis for the column space of R, then the corresponding column vectors of A will form a basis for the column space of A.
122Remarks
- In the previous example, the basis vectors obtained for the column space of A consisted of column vectors of A, but the basis vectors obtained for the row space of A were not all row vectors of A.
- The transpose of the matrix can be used to solve this problem.
123Example (Basis for the Row Space of a Matrix )
- Find a basis for the row space of
- consisting entirely of row vectors from A.
124Example (Basis for the Row Space of a Matrix )
- The columns of AT that contain the leading 1s after row reduction correspond to the first, second, and fourth rows of A.
- Thus, a basis for the row space of A consisting of row vectors of A is
- r1 = [1 -2 0 0 3], r2 = [2 -5 -3 -2 6], r3 = [2 6 18 8 6]
125Example (Basis and Linear Combinations )
- (a) Find a subset of the vectors v1 = (1, -2, 0, 3), v2 = (2, -5, -3, 6), v3 = (0, 1, 3, 0), v4 = (2, -1, 4, -7), v5 = (5, -8, 1, 2) that forms a basis for the space spanned by these vectors.
- (b) Express each vector not in the basis as a linear combination of the basis vectors.
126Example (Basis and Linear Combinations )
- Solution (a)
- Write the vectors as the columns of a matrix and reduce it to row-echelon form; the leading 1s occur in columns 1, 2, and 4.
- Thus, {v1, v2, v4} is a basis for the column space of the matrix, and consequently for the space spanned by the given vectors.
127Example
- Solution (b)
- We can express w3 as a linear combination of w1 and w2, and w5 as a linear combination of w1, w2, and w4 (why?). By inspection, these linear combinations are
- w3 = 2w1 - w2
- w5 = w1 + w2 + w4
128Example
- We call these the dependency equations. The corresponding relationships among the original vectors are
- v3 = 2v1 - v2
- v5 = v1 + v2 + v4
- (The sketch below reproduces both the basis and the dependency equations.)
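The whole computation can be read off from the reduced row-echelon form; the sketch below is not part of the slides.

```python
# Sketch: basis columns and dependency equations via the reduced row-echelon form.
from sympy import Matrix

vecs = [(1, -2, 0, 3), (2, -5, -3, 6), (0, 1, 3, 0), (2, -1, 4, -7), (5, -8, 1, 2)]
A = Matrix(vecs).T          # v1, ..., v5 as the columns of a 4x5 matrix

R, pivots = A.rref()
print(pivots)               # (0, 1, 3) -> v1, v2, v4 form the basis
print(R.col(2).T)           # coefficients 2, -1     -> v3 = 2*v1 - v2
print(R.col(4).T)           # coefficients 1, 1, 1   -> v5 = v1 + v2 + v4
```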
129Four Fundamental Matrix Spaces
- Consider a matrix A and its transpose AT together; then there are six vector spaces of interest:
- row space of A, row space of AT
- column space of A, column space of AT
- nullspace of A, nullspace of AT
- However, transposing converts row vectors into column vectors, so the row space of AT is the column space of A and the column space of AT is the row space of A. This leaves four fundamental matrix spaces associated with A:
- row space of A, column space of A
- nullspace of A, nullspace of AT
130Four Fundamental Matrix Spaces
- If A is an m×n matrix, then the row space of A and the nullspace of A are subspaces of Rn, and the column space of A and the nullspace of AT are subspaces of Rm.
- What is the relationship between the dimensions of these four vector spaces?
131Dimension and Rank
- Theorem 5.6.1
- If A is any matrix, then the row space and column space of A have the same dimension.
- Definition
- The common dimension of the row space and column space of a matrix A is called the rank of A and is denoted by rank(A); the dimension of the nullspace of A is called the nullity of A and is denoted by nullity(A).
132Example (Rank and Nullity)
- Find the rank and nullity of the matrix
- Solution
- The reduced row-echelon form of A is
- Since there are two nonzero rows, the row space and column space are both two-dimensional, so rank(A) = 2.
133Example (Rank and Nullity)
- The corresponding system of equations is
- x1 - 4x3 - 28x4 - 37x5 + 13x6 = 0
- x2 - 2x3 - 12x4 - 16x5 + 5x6 = 0
134Example (Rank and Nullity)
- It follows that the general solution of the system is
- x1 = 4r + 28s + 37t - 13u,
- x2 = 2r + 12s + 16t - 5u,
- x3 = r, x4 = s, x5 = t, x6 = u
- or, in vector form, (x1, ..., x6) = r(4, 2, 1, 0, 0, 0) + s(28, 12, 0, 1, 0, 0) + t(37, 16, 0, 0, 1, 0) + u(-13, -5, 0, 0, 0, 1)
- Thus, nullity(A) = 4.
135Theorems
- Theorem 5.6.2
- If A is any matrix, then rank(A) = rank(AT).
- Theorem 5.6.3 (Dimension Theorem for Matrices)
- If A is a matrix with n columns, then rank(A) + nullity(A) = n. (An illustrative check follows below.)
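A sketch (not from the slides, and using an arbitrary illustrative matrix rather than the one in the example) that checks the Dimension Theorem numerically:

```python
# Sketch: rank(A) + nullity(A) = n for an illustrative matrix.
from sympy import Matrix

A = Matrix([[1, 2, 0, 1],
            [2, 4, 1, 3],
            [3, 6, 1, 4]])          # a 3x4 matrix chosen only for illustration

rank = A.rank()
nullity = len(A.nullspace())
print(rank, nullity, rank + nullity == A.cols)   # 2 2 True
```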
136Theorems
- Theorem 5.6.4
- If A is an m×n matrix, then
- rank(A) = the number of leading variables in the solution of Ax = 0.
- nullity(A) = the number of parameters in the general solution of Ax = 0.
137Example (Sum of Rank and Nullity)
- The matrix in the previous example has 6 columns, so rank(A) + nullity(A) = 6.
- This is consistent with that example, where we showed that rank(A) = 2 and nullity(A) = 4.
138Example
- Find the number of parameters in the general solution of Ax = 0 if A is a 5×7 matrix of rank 3.
- Solution
- nullity(A) = n - rank(A) = 7 - 3 = 4
- Thus, there are four parameters.
139Dimensions of Fundamental Spaces
- Suppose that A is an m×n matrix of rank r; then
- AT is an n×m matrix of rank r, by Theorem 5.6.2
- nullity(A) = n - r and nullity(AT) = m - r, by Theorem 5.6.3
140Maximum Value for Rank
- If A is an m×n matrix, then the row vectors lie in Rn and the column vectors lie in Rm, so the row space of A is at most n-dimensional and the column space is at most m-dimensional.
- Since the row and column spaces have the same dimension (the rank of A), we must conclude that if m ≠ n, then the rank of A is at most the smaller of the values of m and n.
- That is, rank(A) ≤ min(m, n).
141Example
- If A is a 7×4 matrix, then the rank of A is at most 4 and, consequently, the seven row vectors must be linearly dependent. If A is a 4×7 matrix, then again the rank of A is at most 4 and, consequently, the seven column vectors must be linearly dependent.
142Theorems
- Theorem 5.6.5 (The Consistency Theorem)
- If Ax = b is a linear system of m equations in n unknowns, then the following are equivalent.
- Ax = b is consistent.
- b is in the column space of A.
- The coefficient matrix A and the augmented matrix [A | b] have the same rank.
- Theorem 5.6.6
- If Ax = b is a linear system of m equations in n unknowns, then the following are equivalent.
- Ax = b is consistent for every m×1 matrix b.
- The column vectors of A span Rm.
- rank(A) = m. (An illustrative check of the rank criterion follows below.)
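The rank criterion in Theorem 5.6.5 translates directly into code; the sketch below (not from the slides, using an illustrative system) compares rank(A) with the rank of the augmented matrix.

```python
# Sketch: consistency test via rank(A) versus rank([A | b]).
import numpy as np

A = np.array([[1, 2],
              [2, 4],
              [1, 1]], dtype=float)
b_in  = np.array([1, 2, 1], dtype=float)   # equals the first column of A, so it lies in the column space
b_out = np.array([1, 3, 1], dtype=float)   # does not lie in the column space

for b in (b_in, b_out):
    same_rank = np.linalg.matrix_rank(A) == np.linalg.matrix_rank(np.column_stack([A, b]))
    print(same_rank)    # True (consistent), then False (inconsistent)
```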
143Overdetermined System
- A linear system with more equations than unknowns is called an overdetermined linear system.
- If Ax = b is an overdetermined linear system of m equations in n unknowns (so that m > n), then the column vectors of A cannot span Rm.
- Thus, the overdetermined linear system Ax = b cannot be consistent for every possible b.
144Example
- The linear system is overdetermined, so it
cannot be consistent for all possible values of
b1, b2, b3, b4, and b5. Exact conditions under
which the system is consistent can be obtained by
solving the linear system by Gauss-Jordan
elimination.
145Example
- Thus, the system is consistent if and only if b1, b2, b3, b4, and b5 satisfy the conditions or, on solving this homogeneous linear system, b1 = 5r - 4s, b2 = 4r - 3s, b3 = 2r - s, b4 = r, b5 = s, where r and s are arbitrary.
146Theorems
- Theorem 5.6.7
- If Ax = b is a consistent linear system of m equations in n unknowns, and if A has rank r, then the general solution of the system contains n - r parameters.
- Theorem 5.6.8
- If A is an m×n matrix, then the following are equivalent.
- Ax = 0 has only the trivial solution.
- The column vectors of A are linearly independent.
- Ax = b has at most one solution (0 or 1) for every m×1 matrix b.
147Examples
- Number of Parameters in a General Solution
- If A is a 5×7 matrix with rank 4, and if Ax = b is a consistent linear system, then the general solution of the system contains 7 - 4 = 3 parameters.
- An Underdetermined System
- If A is a 5×7 matrix, then for every 5×1 matrix b, the linear system Ax = b is underdetermined. Thus, Ax = b must be consistent for some b, and for each such b the general solution must have 7 - r parameters, where r is the rank of A.
148Theorem 5.6.9 (Equivalent Statements)
- If A is an n×n matrix, and if TA: Rn → Rn is multiplication by A, then the following are equivalent:
- A is invertible.
- Ax = 0 has only the trivial solution.
- The reduced row-echelon form of A is In.
- A is expressible as a product of elementary matrices.
- Ax = b is consistent for every n×1 matrix b.
- Ax = b has exactly one solution for every n×1 matrix b.
149Theorem 5.6.9 (Equivalent Statements)
- The range of TA is Rn.
- TA is one-to-one.
- The column vectors of A are linearly independent.
- The row vectors of A are linearly independent.
- The column vectors of A span Rn.
- The row vectors of A span Rn.
- The column vectors of A form a basis for Rn.
- The row vectors of A form a basis for Rn.
- A has rank n.
- A has nullity 0.