
Convention. We introduce a coordinate system in 3-dimensional space by choosing an origin O and three pairwise perpendicular lines through the origin, called the x-axis, y-axis, and z-axis.
We identify each point P in 3-dimensional space with the vector v = (x, y, z) ∈ R^3 consisting of the coordinates of P. We sometimes say P has vector v, and v has point P.
We also identify the point P with the arrow →OP from the origin O to the point P.

Theorem (Pythagoras). Let ABC be a right-angled triangle with hypotenuse c and short sides a and b. Then

a^2 + b^2 = c^2.
Definition. Let P be a point with vector v. The length of v is denoted by ‖v‖ and defined as the distance between P and the origin.

Theorem. Let v = (x, y, z) ∈ R^3 be a vector. Then
1. ‖v‖ = √(x^2 + y^2 + z^2)
2. ‖v‖ = 0 if and only if v = (0, 0, 0).
3. ‖av‖ = |a| ‖v‖ for all numbers a ∈ R, where |a| is the absolute value of a.
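
For example, a quick numerical check of the length formula and the scaling property in part 3 (a sketch in Python with numpy; the choice of library is an assumption for illustration only):

    import numpy as np

    v = np.array([1.0, 2.0, 2.0])
    print(np.linalg.norm(v))       # sqrt(1 + 4 + 4) = 3.0
    a = -2.0
    # property 3: scaling a vector scales its length by |a|
    print(np.isclose(np.linalg.norm(a * v), abs(a) * np.linalg.norm(v)))  # True
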
Definition. We say two nonzero vectors u, v ∈ R^3 are parallel if u = av for some a ∈ R.
If u = av for some a > 0, we say that u and v have the same direction.
If u = av for some a < 0, we say that u and v have opposite directions.

Theorem. Two nonzero vectors u, v ∈ R^3 are equal if and only if they have the same direction and the same length.
Definition. Let A and B be two points in space. The arrow from A to B is called the geometric vector from A to B, and is denoted by →AB. We call A the tail of →AB, and B the tip of →AB.
The length of →AB is denoted by ‖→AB‖, and is defined as the distance between A and B.

Remark. Different geometric vectors can have the same length and direction. Therefore different geometric vectors can represent the same vector v ∈ R^3, in the same way that different fractions may represent the same rational number (1/2 = 2/4).
Theorem (Parallelogram Law). Let u = →AP and v = →AQ be geometric vectors with the same tail. Let B be the point such that AQBP is a parallelogram.
Then u + v = →AB is a diagonal of the parallelogram, and u − v = →QP is the other diagonal of the parallelogram.

Equivalently, →AP + →PB = →AB and →AP − →AQ = →QP for all points A, B, P and Q.
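
As a numerical illustration of the Parallelogram Law (a sketch in Python with numpy, assumed here only for illustration):

    import numpy as np

    A = np.array([0.0, 0.0, 0.0])
    P = np.array([2.0, 1.0, 0.0])
    Q = np.array([1.0, 3.0, 0.0])

    u = P - A                         # the geometric vector →AP
    v = Q - A                         # the geometric vector →AQ
    B = A + u + v                     # fourth vertex of the parallelogram AQBP

    print(np.allclose(B - A, u + v))  # True: u + v = →AB, one diagonal
    print(np.allclose(P - Q, u - v))  # True: u − v = →QP, the other diagonal
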
Theorem. The diagonals of a parallelogram bisect each other.

Theorem. Let A = (x_1, y_1, z_1) and B = (x_2, y_2, z_2) be two points. Then the distance between A and B is given by

‖→AB‖ = √((x_1 − x_2)^2 + (y_1 − y_2)^2 + (z_1 − z_2)^2).
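
For instance, the distance between A = (1, 2, 3) and B = (4, 6, 3) is √(9 + 16 + 0) = 5 (checked below in Python with numpy, an illustrative assumption):

    import numpy as np

    A = np.array([1.0, 2.0, 3.0])
    B = np.array([4.0, 6.0, 3.0])
    print(np.linalg.norm(A - B))   # 5.0
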
Definition. A function f : R^n → R^m is called linear if it satisfies the following two properties for all x, y ∈ R^n and all a ∈ R:
1. f(x + y) = f(x) + f(y)
2. f(ax) = af(x)

Theorem. Let f : R^n → R^m be a linear function. Then

f(a_1 x_1 + · · · + a_k x_k) = a_1 f(x_1) + · · · + a_k f(x_k)

for all vectors x_1, . . . , x_k ∈ R^n and all a_1, . . . , a_k ∈ R.


Theorem. Let f : R^n → R^m be a function.
1. Then f is linear if and only if f is a matrix transformation.
2. If f is linear, then f = T_A is equal to the matrix transformation induced by the m × n matrix A with columns f(e_1), . . . , f(e_n), where e_1, . . . , e_n denote the columns of the identity matrix I_n.
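
Part 2 gives a recipe for finding the matrix of a linear map: apply f to the columns of the identity matrix. A sketch in Python with numpy, using a sample map f chosen only for illustration:

    import numpy as np

    def f(x):
        # a sample linear map R^2 -> R^3
        return np.array([x[0] + x[1], 2 * x[0], x[1]])

    e1, e2 = np.eye(2)                     # columns of I_2
    A = np.column_stack([f(e1), f(e2)])    # columns of A are f(e1), f(e2)

    x = np.array([3.0, -1.0])
    print(np.allclose(A @ x, f(x)))        # True: f = T_A
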
Definition. Let R_θ : R^2 → R^2 denote the rotation about the origin through the angle θ in the counterclockwise direction.

Theorem. The function R_θ : R^2 → R^2 is linear and equal to the matrix transformation induced by the matrix

    [ cos θ   −sin θ ]
    [ sin θ    cos θ ].

Corollary. We have

cos(θ + ϕ) = cos θ cos ϕ − sin θ sin ϕ,
sin(θ + ϕ) = sin θ cos ϕ + cos θ sin ϕ.
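
The corollary follows because rotating by ϕ and then by θ is the same as rotating by θ + ϕ, so the matrix product R_θ R_ϕ equals R_(θ+ϕ); comparing entries gives the addition formulas. A numerical check (Python with numpy, assumed for illustration):

    import numpy as np

    def R(t):
        return np.array([[np.cos(t), -np.sin(t)],
                         [np.sin(t),  np.cos(t)]])

    theta, phi = 0.7, 0.4
    # composing rotations adds the angles
    print(np.allclose(R(theta) @ R(phi), R(theta + phi)))   # True
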
Definition. Let Q_m : R^2 → R^2 denote the reflection in the line with slope m through the origin.

Theorem. The function Q_m : R^2 → R^2 is linear and equal to the matrix transformation induced by the matrix

    1/(1 + m^2) [ 1 − m^2   2m      ]
                [ 2m        m^2 − 1 ].
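
Two sanity checks one can run on this matrix (a sketch in Python with numpy, an illustrative assumption): points on the line y = mx are fixed, and reflecting twice is the identity.

    import numpy as np

    def Q(m):
        return np.array([[1 - m**2, 2 * m],
                         [2 * m, m**2 - 1]]) / (1 + m**2)

    m = 2.0
    p = np.array([1.0, m])                      # a point on the line y = mx
    print(np.allclose(Q(m) @ p, p))             # True: points on the line are fixed
    print(np.allclose(Q(m) @ Q(m), np.eye(2)))  # True: reflecting twice is the identity
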
Definition. Let P_m : R^2 → R^2 denote the projection onto the line with slope m through the origin.

Theorem. The function P_m : R^2 → R^2 is linear and equal to the matrix transformation induced by the matrix

    1/(1 + m^2) [ 1   m   ]
                [ m   m^2 ].
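
Analogous checks for the projection matrix (Python with numpy, assumed for illustration): projecting twice changes nothing, and every output lies on the line y = mx.

    import numpy as np

    def P(m):
        return np.array([[1.0, m],
                         [m, m**2]]) / (1 + m**2)

    m = 2.0
    print(np.allclose(P(m) @ P(m), P(m)))   # True: P_m is idempotent
    y = P(m) @ np.array([3.0, 1.0])
    print(np.isclose(y[1], m * y[0]))       # True: the image lies on the line
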
Definition. A subset U ⊆ R^n of vectors is called a subspace of R^n if U satisfies the following three properties:
1. U contains the zero vector 0 ∈ R^n.
2. u + v ∈ U for all u, v ∈ U.
3. av ∈ U for all v ∈ U and all a ∈ R.
Definition. Let P and Q be two points such that P ≠ Q. The line through the points P and Q is the set of points

L = {P + t →PQ : t ∈ R}.

Let L be a line. A nonzero vector v is called a direction vector of L if v is parallel to →AB for some points A and B on the line L.

Proposition. Let L be a line in R^3 that contains the origin. Then L is a subspace of R^3.
Definition. Let A, B and C be three points that do not lie on a line. The plane through the points A, B and C is the set of points

E = {A + s →AB + t →AC : s, t ∈ R}.

Proposition. Let E be a plane in R^3 that contains the origin. Then E is a subspace of R^3.

Remark. A subspace U of R^3 is either {0}, or R^3, or a line through the origin, or a plane through the origin.
Definition. Let A be an m × n matrix.
1. The null space of A is the set of vectors v ∈ R^n such that Av = 0, where 0 is the zero vector in R^m.
2. The image of A is the set of vectors u ∈ R^m such that there exists a vector v ∈ R^n with Av = u.

Remark. The null space of A is just the set of solutions to the homogeneous system of linear equations Ax = 0.
The image of A is the set of vectors u ∈ R^m such that the system of linear equations Ax = u has a solution.
Theorem. Let A be an m × n matrix. Then
1. The null space of A is a subspace of R^n.
2. The image of A is a subspace of R^m.
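
For example (a sketch in Python with numpy; the matrix is an illustrative choice), vectors in the null space stay in the null space under addition and scaling:

    import numpy as np

    A = np.array([[1.0, 2.0, 3.0],
                  [2.0, 4.0, 6.0]])

    v = np.array([1.0, 1.0, -1.0])   # Av = 0, so v is in the null space
    w = np.array([2.0, -1.0, 0.0])   # Aw = 0 as well
    print(A @ v)                     # [0. 0.]
    print(A @ (v + 3 * w))           # [0. 0.]: closure, as the theorem asserts
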
Definition. Let x_1, . . . , x_k ∈ R^n be vectors. The span of x_1, . . . , x_k is defined as the set of vectors

span{x_1, . . . , x_k} = {t_1 x_1 + · · · + t_k x_k : t_1, . . . , t_k ∈ R}.

That is, span{x_1, . . . , x_k} is the set of all linear combinations of x_1, . . . , x_k.

Theorem. Let x_1, . . . , x_k ∈ R^n be vectors.
1. Then span{x_1, . . . , x_k} is a subspace of R^n that contains the vectors x_1, . . . , x_k.
2. If the vectors x_1, . . . , x_k lie in a subspace U of R^n, then span{x_1, . . . , x_k} ⊆ U.
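
Deciding whether a vector b lies in span{x_1, . . . , x_k} amounts to solving a linear system with the x_i as columns. A sketch in Python with numpy (an illustrative assumption):

    import numpy as np

    x1 = np.array([1.0, 0.0, 1.0])
    x2 = np.array([0.0, 1.0, 1.0])
    b = np.array([2.0, 3.0, 5.0])          # b = 2*x1 + 3*x2

    A = np.column_stack([x1, x2])
    t, *_ = np.linalg.lstsq(A, b, rcond=None)
    print(t)                               # [2. 3.]
    print(np.allclose(A @ t, b))           # True, so b ∈ span{x1, x2}
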
Definition. Let e_1, . . . , e_n be the columns of the identity matrix I_n. We call the set {e_1, . . . , e_n} the standard basis of R^n.

Proposition. Let {e_1, . . . , e_n} be the standard basis of R^n. Then span{e_1, . . . , e_n} = R^n.
Definition. Let v_1, . . . , v_k ∈ R^n be vectors. The set {v_1, . . . , v_k} is called linearly independent if

t_1 v_1 + · · · + t_k v_k = 0,

where t_1, . . . , t_k ∈ R, implies t_1 = · · · = t_k = 0. That is, the only way of writing the zero vector as a linear combination of v_1, . . . , v_k is to let all coefficients be equal to zero.

If {v_1, . . . , v_k} is not linearly independent, we say this set is linearly dependent.
Theorem. Let A be an m × n matrix with columns a_1, . . . , a_n ∈ R^m.
1. Then {a_1, . . . , a_n} is linearly independent if and only if the homogeneous system Ax = 0 has only the trivial solution.
2. R^m = span{a_1, . . . , a_n} if and only if the system Ax = b has a solution for all b ∈ R^m.
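
For example (Python with numpy, assumed only for illustration), a matrix whose third column is the sum of the first two has linearly dependent columns, and Ax = 0 has a nontrivial solution:

    import numpy as np

    A = np.array([[1.0, 0.0, 1.0],
                  [0.0, 1.0, 1.0],
                  [1.0, 1.0, 2.0]])        # column 3 = column 1 + column 2

    print(np.linalg.matrix_rank(A))        # 2 < 3 columns
    print(A @ np.array([1.0, 1.0, -1.0]))  # [0. 0. 0.]: a nontrivial solution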

Theorem. Let A be an n × n matrix. Then the following are equivalent.
1. A is invertible.
2. The columns of A are linearly independent.
3. The span of the columns of A is R^n.
4. The rows of A are linearly independent.
5. The span of the rows of A is R^n.
Definition. Let U ⊆ R^n be a subspace. A subset B = {v_1, . . . , v_k} ⊆ U is a basis of U if and only if the following two properties are satisfied:
1. U = span B
2. B is linearly independent.

Proposition. The standard basis B = {e_1, . . . , e_n} is a basis of R^n.


Theorem. Let U be a subspace of R^n, let {v_1, . . . , v_k} be a linearly independent subset of U, and let U = span{u_1, . . . , u_m}. Then k ≤ m.
Theorem. Let U be a subspace of R^n such that U ≠ {0}.
1. Then U has a basis.
2. If S ⊆ U is a linearly independent subset, then there exists a basis of U that contains S.
3. If T ⊆ U is a subset such that U = span T, then there exists a basis of U that is contained in T.
Theorem. Let U be a subspace of R^n, and let B = {v_1, . . . , v_k} and B′ = {u_1, . . . , u_m} be two bases of U. Then k = m.

Definition. Let U be a subspace of R^n, and let B = {v_1, . . . , v_k} be a basis of U. The dimension of U is defined as dim U = k.

Remark. The dimension of a subspace is well-defined because every subspace U has a basis, and all bases of U have the same size.
Theorem. Let U be a subspace of R^n with dimension dim U = m, and let S = {v_1, . . . , v_m} be a subset of U. Then the following are equivalent:
1. S is a basis of U.
2. S is linearly independent.
3. U = span S.

Theorem. Let U and W be two subspaces of R^n such that U ⊆ W.
1. Then dim U ≤ dim W.
2. If dim U = dim W, then U = W.
Definition. Let x = (x_1, . . . , x_n) ∈ R^n be a vector. The length of x is defined as

‖x‖ = √(x_1^2 + · · · + x_n^2).

Theorem. Let x, y, z ∈ R^n, and recall that x · y denotes the dot product of the vectors x and y. Then
1. x · y = y · x
2. x · (y + z) = x · y + x · z
3. (ax) · y = a(x · y) = x · (ay) for all a ∈ R.
4. x · x = ‖x‖^2
5. ‖x‖ ≥ 0, and ‖x‖ = 0 if and only if x is the zero vector.
6. ‖ax‖ = |a| ‖x‖
Theorem (Cauchy Inequality). Let x, y ∈ R^n. Then

|x · y| ≤ ‖x‖ ‖y‖,

and |x · y| = ‖x‖ ‖y‖ if and only if one of the vectors x and y is a scalar multiple of the other.

Theorem (Triangle Inequality). Let x, y ∈ R^n. Then

‖x + y‖ ≤ ‖x‖ + ‖y‖.
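
Both inequalities are easy to test numerically (a sketch in Python with numpy, an illustrative assumption):

    import numpy as np

    x = np.array([1.0, 2.0, -1.0, 3.0])
    y = np.array([2.0, 0.0, 1.0, -1.0])

    # Cauchy: |x·y| ≤ ‖x‖ ‖y‖
    print(abs(x @ y) <= np.linalg.norm(x) * np.linalg.norm(y))             # True
    # Triangle: ‖x + y‖ ≤ ‖x‖ + ‖y‖
    print(np.linalg.norm(x + y) <= np.linalg.norm(x) + np.linalg.norm(y))  # True
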
Theorem (Cosine Law). Let a, b, c be the sides of a triangle, and let θ be the angle opposite of c. Then

c^2 = a^2 + b^2 − 2ab cos θ.

Theorem. Let x, y ∈ R^3 be vectors, and let θ be the angle between x and y. Then

cos θ = (x · y) / (‖x‖ ‖y‖).
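
For example, the angle between (1, 0, 0) and (1, 1, 0) is 45°, since cos θ = 1/√2 (checked in Python with numpy, an illustrative assumption):

    import numpy as np

    x = np.array([1.0, 0.0, 0.0])
    y = np.array([1.0, 1.0, 0.0])
    cos_theta = (x @ y) / (np.linalg.norm(x) * np.linalg.norm(y))
    print(np.degrees(np.arccos(cos_theta)))   # 45.0
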
Definition. Two vectors x, y ∈ R^n are called orthogonal if x · y = 0.
A set of vectors {x_1, . . . , x_k} ⊆ R^n is called orthogonal if x_i · x_j = 0 for all i ≠ j, and if no x_i is equal to the zero vector.
A set of vectors {x_1, . . . , x_k} ⊆ R^n is called orthonormal if it is orthogonal and ‖x_i‖ = 1 for all i.

Proposition. The standard basis {e_1, . . . , e_n} is an orthonormal basis of R^n.
Definition. Let E be a plane in R^3. A nonzero vector n ∈ R^3 is called a normal vector for the plane E if n is orthogonal to →PQ for all points P, Q on E.

Proposition. Let E be a plane in R^3, let P = (x_0, y_0, z_0) be a point on E, and let n = (a, b, c) be a normal vector for E. Then a point Q = (x, y, z) ∈ R^3 lies on E if and only if Q satisfies the equation

n · (Q − P) = a(x − x_0) + b(y − y_0) + c(z − z_0) = 0.

This is called the scalar equation of the plane E.
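
A small sketch of the scalar-equation test (Python with numpy; the normal vector and point are illustrative choices):

    import numpy as np

    n = np.array([1.0, -2.0, 3.0])    # a normal vector for E
    P = np.array([1.0, 1.0, 1.0])     # a point on E

    def on_plane(Q):
        return np.isclose(n @ (Q - P), 0.0)

    print(on_plane(P))                               # True
    print(on_plane(P + np.array([2.0, 1.0, 0.0])))   # True: (2,1,0)·n = 0
    print(on_plane(P + n))                           # False: moving along n leaves E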


Definition. The cross product of two vectors u = (x_1, y_1, z_1) and v = (x_2, y_2, z_2) in R^3 is defined as

u × v = (y_1 z_2 − z_1 y_2, −x_1 z_2 + z_1 x_2, x_1 y_2 − y_1 x_2).

Theorem. Let u, v ∈ R^3 be nonzero vectors.
1. Then the cross product u × v is orthogonal to both u and v.
2. u × v is the zero vector if and only if u and v are parallel.
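
Both properties can be checked numerically (Python with numpy, an illustrative assumption):

    import numpy as np

    u = np.array([1.0, 2.0, 3.0])
    v = np.array([4.0, 5.0, 6.0])
    w = np.cross(u, v)

    print(w)                                               # [-3.  6. -3.]
    print(np.isclose(w @ u, 0.0), np.isclose(w @ v, 0.0))  # True True: orthogonal to both
    print(np.cross(u, 2 * u))                              # [0. 0. 0.]: parallel vectors
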
Theorem. Let {x_1, . . . , x_k} be an orthogonal subset of R^n. Then

‖x_1 + · · · + x_k‖^2 = ‖x_1‖^2 + · · · + ‖x_k‖^2.
Theorem. Let S = {x_1, . . . , x_k} be an orthogonal subset of R^n. Then S is linearly independent.

Theorem. Let {x_1, . . . , x_m} be an orthogonal basis of a subspace U of R^n, and let u ∈ U. Then

u = (u · x_1)/‖x_1‖^2 x_1 + (u · x_2)/‖x_2‖^2 x_2 + · · · + (u · x_m)/‖x_m‖^2 x_m.
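
In other words, for an orthogonal basis the coefficients of u can be read off with dot products, with no system of equations to solve. A sketch (Python with numpy; the basis is an illustrative choice):

    import numpy as np

    x1 = np.array([1.0, 1.0, 0.0])   # x1 · x2 = 0, so {x1, x2} is orthogonal
    x2 = np.array([1.0, -1.0, 0.0])
    u = 2 * x1 + 5 * x2              # a vector in span{x1, x2}

    c1 = (u @ x1) / (x1 @ x1)        # u·x1 / ‖x1‖^2
    c2 = (u @ x2) / (x2 @ x2)        # u·x2 / ‖x2‖^2
    print(c1, c2)                    # 2.0 5.0: the formula recovers the coefficients
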
Definition. Let L be a line in R^3 through a point Q with direction vector v. Let u = →QP for some point P ∈ R^3.
Write u = u_1 + u_2 such that u_1 is parallel to v, and u_2 is orthogonal to v.
The vector u_1 is called the orthogonal projection of u on v, and is denoted by proj_v(u) = u_1.

Proposition. Let u, v ∈ R^3 such that v is not the zero vector. Then

proj_v(u) = (u · v)/‖v‖^2 v.
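
For example, projecting u = (3, 4, 0) onto v = (1, 0, 0) gives (3, 0, 0), and the remainder is orthogonal to v (Python with numpy, an illustrative assumption):

    import numpy as np

    u = np.array([3.0, 4.0, 0.0])
    v = np.array([1.0, 0.0, 0.0])
    proj = (u @ v) / (v @ v) * v
    print(proj)                             # [3. 0. 0.]
    print(np.isclose((u - proj) @ v, 0.0))  # True: u − proj_v(u) is orthogonal to v
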
Definition. Let A be an m × n matrix.
1. The column space of A, denoted by col(A), is the span of the columns of A.
2. The row space of A, denoted by row(A), is the span of the rows of A.

Definition. Let A be an m × n matrix. An elementary column operation is one of the following:
1. Exchanging two columns of A.
2. Multiplying a column of A by a nonzero number a ∈ R.
3. Adding a multiple of a column of A to a different column of A.
Lemma. Let A and B be m × n matrices.
1. If B can be obtained from A by elementary row operations, then row A = row B.
2. If B can be obtained from A by elementary column operations, then col A = col B.

Lemma. Let R be an m × n row-echelon matrix.
1. Then the nonzero rows of R are a basis for row R.
2. The columns of R containing the leading 1s are a basis for col R.
Theorem (Rank Theorem). Let A be an m × n matrix. Then

rank(A) = dim(row A) = dim(col A).

Let R be a row-reduced matrix obtained from A by elementary row operations.
1. Then the nonzero rows of R are a basis for row A.
2. Suppose that the leading 1s of R are contained in columns j_1, . . . , j_k of R. Then the columns j_1, . . . , j_k of A are a basis for col A.

Corollary. Let A be a matrix. Then rank A = rank A^T.
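
A quick numerical check of the Rank Theorem and its corollary (Python with numpy, an illustrative assumption):

    import numpy as np

    A = np.array([[1.0, 2.0, 0.0],
                  [0.0, 0.0, 1.0],
                  [1.0, 2.0, 1.0]])     # row 3 = row 1 + row 2

    print(np.linalg.matrix_rank(A))     # 2: dimension of row space and column space
    print(np.linalg.matrix_rank(A.T))   # 2: rank A = rank A^T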


Theorem. Let A be an m × n matrix. Then the following are equivalent:
1. rank A = n
2. The rows of A span R^n.
3. The columns of A are linearly independent in R^m.
4. The n × n matrix A^T A is invertible.
5. There exists an n × m matrix B such that BA = I_n.
6. The homogeneous system Ax = 0 only has the trivial solution.
Theorem. Let A be an m × n matrix. Then the following are equivalent:
1. rank A = m
2. The columns of A span R^m.
3. The rows of A are linearly independent in R^n.
4. The m × m matrix AA^T is invertible.
5. There exists an n × m matrix B such that AB = I_m.
6. The system Ax = b is consistent for all b ∈ R^m.
