HW5 Solutions
4.1 #19(b). Determine the image and kernel of the following linear operators on P3.
(b) L(p(x)) = p(x) − p′(x).
Solution. Let
p(x) = a + bx + cx²
be a general element of P3. Then,
L(p(x)) = (a + bx + cx²) − (b + 2cx) = (a − b) + (b − 2c)x + cx².
So L(p(x)) = 0 if and only if
a − b = 0
b − 2c = 0
c = 0
and this simple 3 × 3 system has only the solution c = 0, b = 0, and a = 0. Hence,
ker (L) = {0}
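As a quick sanity check (not part of the assigned solution), the kernel computation can be verified symbolically; sympy is my own addition here, and the symbols a, b, c mirror the coefficients above:

```python
import sympy as sp

# Coefficients of a general p(x) = a + bx + cx^2 in P3.
x, a, b, c = sp.symbols("x a b c")
p = a + b*x + c*x**2
Lp = sp.expand(p - sp.diff(p, x))        # L(p(x)) = p(x) - p'(x)

# L(p) = 0 forces every coefficient of L(p) to vanish.
coeffs = sp.Poly(Lp, x).all_coeffs()     # [c, b - 2c, a - b]
sol = sp.solve(coeffs, [a, b, c], dict=True)
assert sol == [{a: 0, b: 0, c: 0}]       # only the zero polynomial
```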
Likewise, a polynomial q(x) = α + βx + γx² is in the range of L if and only if
a − b = α
b − 2c = β
c = γ
and this system can be solved for any choice of constants α, β, and γ:
c = γ
b = β + 2γ
a = α + β + 2γ
Thus, given any polynomial q(x) = α + βx + γx² in P3, the polynomial
p(x) = (α + β + 2γ) + (β + 2γ)x + γx² satisfies
L (p (x)) = q (x) .
This shows that L is onto; that is,
R (L) = P3
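The preimage formula above can also be confirmed symbolically; a minimal sketch (sympy and the symbol names alpha, beta, gamma are my additions):

```python
import sympy as sp

# Arbitrary target q(x) = alpha + beta*x + gamma*x^2 in P3.
x, alpha, beta, gamma = sp.symbols("x alpha beta gamma")
q = alpha + beta*x + gamma*x**2

# Candidate preimage from the solution: p = (a + b + 2g) + (b + 2g)x + g x^2.
p = (alpha + beta + 2*gamma) + (beta + 2*gamma)*x + gamma*x**2
Lp = sp.expand(p - sp.diff(p, x))        # L(p(x)) = p(x) - p'(x)

assert sp.simplify(Lp - q) == 0          # L(p) = q for all alpha, beta, gamma
```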
Remark. As soon as you figure out that ker (L) = {0} , you also know that
dim P3 = dim ker(L) + dim R(L) = 0 + dim R(L)
Since dim R(L) = dim P3 and R(L) ⊆ P3, you know (why?) that R(L) = P3
(without the algebra).
4.2 #24. Let A be a nonsingular n × n matrix, and let LA be the linear operator
defined by LA(x) = Ax. Show that
Ax = x1 a1 + x2 a2 + ··· + xn an
where A = [a1 a2 ... an] expresses A in terms of its columns and x = [x1 x2 ... xn]ᵀ.
Since R(LA) is the set of all the vectors LA(x) = Ax as x varies over Rⁿ, the
displayed equation says exactly that R(LA) = span(a1, a2, ..., an). Because A is
nonsingular, for any b in Rⁿ the equation LA(x) = Ax = b has a solution x.
Thus, R(LA) = Rⁿ.
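The column-combination identity and the solvability claim can both be checked numerically; the 2 × 2 matrix below is a made-up example, and numpy is not part of the original solution:

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 3.0]])   # nonsingular sample: det = 5
x = np.array([4.0, -1.0])

# Ax = x1*a1 + x2*a2, a linear combination of the columns of A.
combo = x[0] * A[:, 0] + x[1] * A[:, 1]
assert np.allclose(A @ x, combo)

# Since det(A) != 0, Ax = b is solvable for every b, so R(L_A) = R^2.
b = np.array([7.0, 2.0])
assert np.allclose(A @ np.linalg.solve(A, b), b)
```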
Remarks (on (b) once (a) is done). Another proof uses the fact that LA : Rⁿ →
Rⁿ is onto if and only if it is one-to-one. Here it is equally easy to check onto
directly as above, but you should provide the argument that LA is one-to-one; hence, onto.
Alternatively, since A is nonsingular, det(A) ≠ 0 and the determinant test shows
that the columns of A, a1, a2, ..., an, are linearly independent in Rⁿ. Since there
are n columns, these linearly independent vectors are a basis. (Why?) Since the
elements of a basis span the vector space, R(LA) = span(a1, a2, ..., an) = Rⁿ.
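The determinant test in this alternative argument can be illustrated numerically; the matrix below is hypothetical and chosen only for the example:

```python
import numpy as np

A = np.array([[1.0, 2.0], [0.0, 1.0]])        # sample nonsingular matrix
assert abs(np.linalg.det(A)) > 1e-12           # det(A) != 0: determinant test

# Full column rank: the n columns are independent, hence a basis of R^n.
assert np.linalg.matrix_rank(A) == A.shape[0]
```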
Solution. To show that T(v1), ..., T(vn) are linearly independent in W you
must show that the equation
c1 T(v1) + ··· + cn T(vn) = 0
implies that all the scalars cj are zero. By linearity the foregoing equation can
be expressed as
T(c1 v1 + ··· + cn vn) = 0.
Now, T (0) = 0 for any linear transformation. Since this T is one-to-one the only
vector that T maps to zero is the zero vector. Hence, the preceding equation
implies
c1 v1 + ··· + cn vn = 0.
Since v1, ..., vn are linearly independent in V, this equation implies
c1 = 0, c2 = 0, ..., cn = 0.
As noted at the start, this proves that T(v1), ..., T(vn) are linearly independent
in W.
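A concrete instance of this argument can be run numerically; the injective map T (a 3 × 2 matrix) and the vectors v1, v2 below are hypothetical examples, not from the assignment:

```python
import numpy as np

# T : R^2 -> R^3 given by a matrix of rank 2, hence one-to-one.
T = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
assert np.linalg.matrix_rank(T) == 2

# Independent vectors v1, v2 in R^2 (columns of V).
V = np.column_stack([np.array([1.0, 1.0]), np.array([1.0, -1.0])])
assert np.linalg.matrix_rank(V) == 2

# Their images T(v1), T(v2) remain independent in R^3.
images = T @ V
assert np.linalg.matrix_rank(images) == 2
```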
Remark. Several of you assumed that S was a finite set. This made the proof
a little easier notationally. The set S need not be a finite set of vectors. For
example, P, the vector space of all polynomials, does not have a finite spanning
set but is spanned by S = {1, x, x², x³, ...}.