
Vectors

Introduction to Applied Linear Algebra: Vectors, Matrices, and Least Squares
Stephen Boyd and Lieven Vandenberghe


VECTOR SPACES AND SUBSPACES
▪ Definition: A vector space is a nonempty set V of
objects, called vectors, on which are defined two
operations, called addition and multiplication by
scalars (real numbers), subject to the ten axioms (or
rules) listed below. The axioms must hold for all
vectors u, v, and w in V and for all scalars c and d.
1. The sum of u and v, denoted by 𝐮 + 𝐯, is in V.
2. 𝐮 + 𝐯 = 𝐯 + 𝐮
3. (𝐮 + 𝐯) + 𝐰 = 𝐮 + (𝐯 + 𝐰)
4. There is a zero vector 0 in V such that 𝐮 + 𝟎 = 𝐮.
VECTOR SPACES AND SUBSPACES
5. For each u in V, there is a vector −𝐮 in V such that
𝐮 + (−𝐮) = 𝟎.
6. The scalar multiple of u by c, denoted by cu, is in V.
7. 𝑐(𝐮 + 𝐯) = 𝑐𝐮 + 𝑐𝐯
8. (𝑐 + 𝑑)𝐮 = 𝑐𝐮 + 𝑑𝐮
9. 𝑐(𝑑𝐮) = (𝑐𝑑)𝐮
10. 1𝐮 = 𝐮
▪ Using these axioms, we can show that the zero vector
in Axiom 4 is unique, and the vector −𝐮, called the
negative of u, in Axiom 5 is unique for each u in V.
SUBSPACES

▪ Definition: A subspace of a vector space V is a subset


H of V that has three properties:
a. The zero vector of V is in H.
b. H is closed under vector addition. That is, for each u
and v in H, the sum 𝐮 + 𝐯 is in H.
c. H is closed under multiplication by scalars. That is, for
each u in H and each scalar c, the vector cu is in H.
SUBSPACES
▪ Properties (a), (b), and (c) guarantee that a subspace H of V is
itself a vector space, under the vector space operations already
defined in V.

▪ Every subspace is a vector space.


▪ Conversely, every vector space is a subspace (of itself and
possibly of other larger spaces).

▪ The set consisting of only the zero vector in a vector space V is


a subspace of V, called the zero subspace and written as {0}.

▪ Recall that the term linear combination refers to any sum of scalar
multiples of vectors, and that Span{v1,…,vp} denotes the set of all
vectors that can be written as linear combinations of v1,…,vp.
A SUBSPACE SPANNED BY A SET

▪ Theorem 1: If v1,…,vp are in a vector space V, then


Span {v1,…,vp} is a subspace of V.

▪ We call Span{v1,…,vp} the subspace spanned (or


generated) by {v1,…,vp}.

▪ Given any subspace H of V, a spanning (or


generating) set for H is a set {v1,…,vp} in H such
that
𝐻 = Span{𝐯1 , … , v𝑝 }
Finding a Basis
Orthonormal vectors

► set of n-vectors a1, . . . ,ak are (mutually) orthogonal if ai ⊥ aj for i ≠ j

► they are normalized if ǁai ǁ = 1 for i = 1,. . . ,k

► they are orthonormal if both hold

► can be expressed using inner products as


aᵢᵀaⱼ = 1 if i = j,   and   aᵢᵀaⱼ = 0 if i ≠ j

► orthonormal sets of vectors are linearly independent

► by independence-dimension inequality, must have k ≤ n

► when k = n, a1 ,. . . ,an are an orthonormal basis
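As a quick check of this definition, here is a minimal sketch (assuming NumPy; the function name is_orthonormal is illustrative) that tests orthonormality by forming all inner products aᵢᵀaⱼ at once as the Gram matrix AᵀA:

```python
import numpy as np

def is_orthonormal(vectors, tol=1e-10):
    """Return True if the given n-vectors are mutually orthogonal and normalized."""
    A = np.column_stack(vectors)          # n x k matrix whose columns are the vectors
    gram = A.T @ A                        # entry (i, j) is a_i^T a_j
    return np.allclose(gram, np.eye(A.shape[1]), atol=tol)

# the orthonormal basis of R^3 from the example that follows
a1 = np.array([0.0, 0.0, -1.0])
a2 = np.array([1.0, 1.0, 0.0]) / np.sqrt(2)
a3 = np.array([1.0, -1.0, 0.0]) / np.sqrt(2)
print(is_orthonormal([a1, a2, a3]))       # True
```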


Examples of orthonormal bases

► standard unit n-vectors e1 ,. . . ,en

► the 3-vectors

(0, 0, −1),   (1/√2)(1, 1, 0),   (1/√2)(1, −1, 0)     (written as columns)
► the 2-vectors shown in the accompanying figure (not reproduced here)
ORTHONORMAL SETS

Show that {v1, v2, v3} is an orthonormal basis of ℝ3 , where


3 / 11   −1/ 6   −1/ 66 
     
v1 = 1/ 11  , v 2 =  2 / 6  , v3 =  −4 / 66 
     
1/ 11   1/ 6   7 / 66 
Solution: Compute

v1 · v2 = −3/√66 + 2/√66 + 1/√66 = 0
v1 · v3 = −3/√726 − 4/√726 + 7/√726 = 0
v2 · v3 = 1/√396 − 8/√396 + 7/√396 = 0
Each vector also has unit length (for example, ‖v1‖² = 9/11 + 1/11 + 1/11 = 1), so {v1, v2, v3}
is an orthonormal set of three linearly independent vectors in ℝ³ and hence an orthonormal basis.
Orthogonal Matrix

► An orthogonal matrix is a square matrix with real entries whose columns and
rows are orthonormal vectors. Equivalently, a matrix Q is orthogonal if its
transpose equals its inverse:

Qᵀ = Q⁻¹,   i.e.   QᵀQ = QQᵀ = I
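A minimal sketch (assuming NumPy) that illustrates this definition on a plane rotation, a standard example of an orthogonal matrix:

```python
import numpy as np

theta = 0.3
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # rotation by theta: columns are orthonormal

print(np.allclose(Q.T @ Q, np.eye(2)))            # True: Q^T Q = I
print(np.allclose(Q.T, np.linalg.inv(Q)))         # True: transpose equals inverse
```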
Gram–Schmidt algorithm

given n-vectors a1 ,. . . ,ak


for i = 1,. . . ,k
1. Orthogonalization: q̃i = ai − (q1ᵀai)q1 − ··· − (q i−1ᵀ ai)q i−1
2. Test for linear dependence: if q̃i = 0, quit
3. Normalization: qi = q̃i / ǁq̃i ǁ

► if G–S does not stop early (in step 2), a1, . . . ,ak are linearly independent

► if G–S stops early in iteration i = j, then aj is a linear combination of


a1, . . . ,a j−1 (so a1, . . . ,ak are linearly dependent)
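A minimal sketch of the algorithm above in Python (assuming NumPy; the function name gram_schmidt is illustrative). It returns the orthonormal vectors q1, …, qk, or None if it stops early in step 2:

```python
import numpy as np

def gram_schmidt(a_list, tol=1e-10):
    """Gram-Schmidt: orthonormalize a_list, or return None if linearly dependent."""
    q_list = []
    for a in a_list:
        a = np.asarray(a, dtype=float)
        # 1. Orthogonalization: remove the components of a along q1, ..., q_{i-1}
        q_tilde = a - sum((q @ a) * q for q in q_list)
        # 2. Test for linear dependence
        if np.linalg.norm(q_tilde) <= tol:
            return None
        # 3. Normalization
        q_list.append(q_tilde / np.linalg.norm(q_tilde))
    return q_list
```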
Example
Apply the Gram–Schmidt process to {(1, −1, 1), (1, 0, 1), (1, 1, 2)}.
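Assuming the gram_schmidt sketch above is in scope, the example can be checked numerically; working it by hand gives q1 = (1, −1, 1)/√3, q2 = (1, 2, 1)/√6, q3 = (−1, 0, 1)/√2:

```python
import numpy as np

a_list = [np.array([1.0, -1.0, 1.0]),
          np.array([1.0,  0.0, 1.0]),
          np.array([1.0,  1.0, 2.0])]

for q in gram_schmidt(a_list):
    print(q)
# approximately:
#   [ 0.577 -0.577  0.577]    = (1, -1, 1)/sqrt(3)
#   [ 0.408  0.816  0.408]    = (1, 2, 1)/sqrt(6)
#   [-0.707  0.     0.707]    = (-1, 0, 1)/sqrt(2)
```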
THE COORDINATE MAPPING
THE UNIQUE REPRESENTATION THEOREM

▪ Theorem: Let B = {𝐛1 , . . . , 𝐛𝑛 } be a basis for vector


space V. Then for each x in V, there exists a unique
set of scalars c1, …, cn such that
𝐱 = 𝑐1 𝐛1 +. . . +𝑐𝑛 𝐛𝑛
COORDINATES IN ℝ𝑛
▪ Definition: Suppose B = {𝐛1 , . . . , 𝐛𝑛 } is a basis for V
and x is in V. The coordinates of x relative to the
basis B (or the B-coordinate of x) are the weights
c1, …, cn such that 𝐱 = 𝑐1 𝐛1 +. . . +𝑐𝑛 𝐛𝑛 .

▪ If c1, …, cn are the B-coordinates of x, then the vector in ℝⁿ
[x]B = (c1, …, cn)     (written as a column)
is the coordinate vector of x (relative to B), or the B-coordinate vector of x.
▪ The mapping x ↦ [x]B is the coordinate mapping (determined by B).
COORDINATES IN ℝ𝑛
▪ When a basis B for ℝ𝑛 is fixed, the B-coordinate
vector of a specified x is easily found, as in the
example below.
▪ Example 1: Let b1 = (2, 1), b2 = (−1, 1), x = (4, 5) (written as columns), and
B = {b1, b2}. Find the coordinate vector [x]B of x relative to B.
▪ Solution: The B-coordinates c1, c2 of x satisfy
c1 b1 + c2 b2 = x,   that is,   c1 (2, 1) + c2 (−1, 1) = (4, 5)
COORDINATES IN ℝ𝑛
or, in matrix form,
[ 2  −1 ] [c1]   [ 4 ]
[ 1   1 ] [c2] = [ 5 ]
▪ This equation can be solved by row operations on an
augmented matrix or by using the inverse of the
matrix on the left.
▪ In any case, the solution is c1 = 3, c2 = 2. Thus x = 3b1 + 2b2 and
[x]B = (c1, c2) = (3, 2)
▪ The matrix [b1 b2] in this equation changes the B-coordinates of a
vector x into the standard coordinates for x.
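A minimal sketch (assuming NumPy) of this computation: stack b1 and b2 as the columns of a matrix and solve for the B-coordinates of x:

```python
import numpy as np

P_B = np.array([[2.0, -1.0],
                [1.0,  1.0]])      # columns are b1, b2
x = np.array([4.0, 5.0])

x_B = np.linalg.solve(P_B, x)      # B-coordinate vector [x]_B
print(x_B)                         # [3. 2.]
print(P_B @ x_B)                   # back to standard coordinates: [4. 5.]
```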
COORDINATES IN ℝ𝑛
▪ An analogous change of coordinates can be carried
out in ℝ𝑛 for a basis B = {𝐛1 , . . . , 𝐛𝑛 }.
▪ Let PB = [b1 b2 ⋯ bn].
▪ Then the vector equation
x = c1b1 + c2b2 + ⋯ + cnbn
is equivalent to
x = PB [x]B
▪ PB is called the change-of-coordinates matrix from B to
the standard basis in ℝ𝑛 .
▪ Left-multiplication by PB transforms the coordinate vector
[x]B into x.
THE COORDINATE MAPPING
▪ Example: Let v1 = (3, 6, 2), v2 = (−1, 0, 1), x = (3, 12, 7) (as columns),
and B = {v1, v2}. Then B is a basis for H = Span{v1, v2}.
Determine if x is in H, and if it is, find the coordinate
vector of x relative to B.
THE COORDINATE MAPPING
▪ Solution: If x is in H, then the following vector equation is
consistent:
c1 (3, 6, 2) + c2 (−1, 0, 1) = (3, 12, 7)
▪ The scalars c1 and c2, if they exist, are the B-coordinates of x.
▪ Using row operations on the augmented matrix, we obtain
[ 3  −1   3 ]     [ 1  0  2 ]
[ 6   0  12 ]  ~  [ 0  1  3 ]
[ 2   1   7 ]     [ 0  0  0 ]
▪ Thus c1 = 2, c2 = 3 and [x]B = (2, 3).
▪ The coordinate system on H determined by B is illustrated in the
original slides (figure not reproduced here).
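A minimal sketch (assuming NumPy) of this example: since the system is overdetermined (3 equations, 2 unknowns), solve it in the least-squares sense and check that the residual is zero, which shows that x lies in H:

```python
import numpy as np

A = np.array([[3.0, -1.0],
              [6.0,  0.0],
              [2.0,  1.0]])                      # columns are v1, v2
x = np.array([3.0, 12.0, 7.0])

c, *_ = np.linalg.lstsq(A, x, rcond=None)        # least-squares solution
print(c)                                         # [2. 3.]  ->  [x]_B = (2, 3)
print(np.allclose(A @ c, x))                     # True, so x is in H
```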
CHANGE OF BASIS
▪ Theorem 15: Let B = {b1, . . . , bn} and C = {c1, . . . , cn} be bases
for a vector space V. Then there is a unique n × n matrix P(C←B)
such that
[x]C = P(C←B) [x]B
The columns of P(C←B) are the C-coordinate vectors of the vectors
in the basis B. That is,
P(C←B) = [ [b1]C  [b2]C  ...  [bn]C ]
CHANGE OF BASIS
▪ The matrix P(C←B) in the theorem is called the change-of-coordinates
matrix from B to C. Multiplication by P(C←B) converts B-coordinates
into C-coordinates.
▪ Figure 2 of the original slides illustrates the change-of-coordinates
equation (not reproduced here).
CHANGE OF BASIS IN ℝ𝑛
▪ Example 1: Let b1 = (−9, 1), b2 = (−5, −1), c1 = (1, −4), c2 = (3, −5),
and consider the bases for ℝ² given by B = {b1, b2} and C = {c1, c2}.
Find the change-of-coordinates matrix from B to C.
▪ Solution: The matrix P(C←B) involves the C-coordinate vectors of b1
and b2. Let [b1]C = (x1, x2) and [b2]C = (y1, y2). Then, by definition,
[c1 c2] (x1, x2) = b1   and   [c1 c2] (y1, y2) = b2
CHANGE OF BASIS IN ℝ𝑛
▪ To solve both systems simultaneously, augment the
coefficient matrix with b1 and b2, and row reduce:
[c1 c2 ⋮ b1 b2] = [  1   3 ⋮ −9  −5 ]  ~  [ 1  0 ⋮  6   4 ]
                  [ −4  −5 ⋮  1  −1 ]     [ 0  1 ⋮ −5  −3 ]
▪ Thus
[b1]C = (6, −5)   and   [b2]C = (4, −3)
▪ The desired change-of-coordinates matrix is therefore
P(C←B) = [ [b1]C  [b2]C ] = [  6   4 ]
                            [ −5  −3 ]
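A minimal sketch (assuming NumPy) of this example: each column [bj]C solves [c1 c2] [bj]C = bj, so the whole change-of-coordinates matrix can be obtained in one solve:

```python
import numpy as np

C = np.array([[ 1.0,  3.0],
              [-4.0, -5.0]])          # columns are c1, c2
B = np.array([[-9.0, -5.0],
              [ 1.0, -1.0]])          # columns are b1, b2

P_C_from_B = np.linalg.solve(C, B)    # solves C @ P = B column by column
print(P_C_from_B)                     # [[ 6.  4.]
                                      #  [-5. -3.]]
```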
CHANGE OF BASIS
▪ Example 2: Let b1 = (7, 5), b2 = (−3, −1), c1 = (1, −5), c2 = (−2, 2),
and consider the bases for ℝ² given by B = {b1, b2} and C = {c1, c2}.
Find [x]C given that [x]B = (2, 1).
▪ Solution 1: x = 2b1 + 1b2 = 2(7, 5) + (−3, −1) = (11, 9).
▪ Suppose [x]C = (x1, x2). Then, by definition,
x1c1 + x2c2 = x,   that is,   [c1 c2] (x1, x2) = (11, 9)
Solving this system gives x1 = −5, x2 = −8, so [x]C = (−5, −8).
CHANGE OF BASIS
𝑃
Solution 2 𝐱  = ՚ B 𝐱 B

𝑃 7 1 −2
՚ B ▪ =a +b ,
5 −5 2
−3 1 −2
▪ =c +d
−1 −5 2
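A minimal sketch (assuming NumPy) of Example 2, combining both solutions: build P(C←B) with one solve and apply it to [x]B:

```python
import numpy as np

B = np.array([[ 7.0, -3.0],
              [ 5.0, -1.0]])          # columns are b1, b2
C = np.array([[ 1.0, -2.0],
              [-5.0,  2.0]])          # columns are c1, c2
x_B = np.array([2.0, 1.0])

P_C_from_B = np.linalg.solve(C, B)    # change-of-coordinates matrix from B to C
print(P_C_from_B)                     # [[-3.  1.]
                                      #  [-5.  2.]]
print(P_C_from_B @ x_B)               # [-5. -8.]  ->  [x]_C = (-5, -8)
```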