LINEAR ALGEBRA
BSc
Compiled By
AUGUST, 2019
MC / EL / CE /RN 169
Email: otoohenry@gmail.com
OFFICE: Mathematical Sciences Department / Phone: 0541414406
GRADING CRITERIA and EVALUATION PROCEDURES: The grade for the course will be
based on homework, a class test and a final exam.
1. Attendance: All students should make it a point to attend classes. Random
attendance will be taken to constitute 10% of the final grade.
2. Homework: Homework is worth 5% of the final grade. Homework will be assigned
on Fridays and will be due before 8 AM the following Monday.
3. Class test: There shall be unannounced short class tests at any time within the
semester which will constitute 15% of the continuous assessment. Also, one main
test based on theoretical techniques worth 10% of the grade will be given during
class. The test date will be announced one week in advance.
4. Final exams: A final exam is worth 60% of the final grade.
ASSESSMENT OF COURSE
Assessment of Lecturer
At the end of the course each student will be required to evaluate the course and the
lecturer’s performance by answering a questionnaire specifically prepared to obtain
the views and opinions of the student about the course and lecturer. Please be sincere
and frank.
STUDENT RESPONSIBILITY: It is assumed that each student attends the lectures and works
all the assigned problems. The student is responsible for ALL material covered in class and
any assigned reading. Students may discuss homework problems but you are responsible for
writing your individual answers. If your name does not appear on the final class roll, then you
will not receive a grade for this course.
EXAM POLICY: No makeup for missed work will normally be given, unless extenuating
circumstances occur. Travel plans are not extenuating circumstances. Acceptable medical
excuses must state explicitly that the student should be excused from class.
CONTENTS
1.0 Introduction
1.1 Complex Numbers and the Complex Plane
1.2 Equality of Complex Numbers
1.1.1 Set of Complex Numbers
1.3 Addition and Subtraction of Complex Numbers
1.4 Multiplication of Complex Numbers
1.5 Square Root of a Complex Number
1.5.1 Multiplication by i
1.6 Conjugate of a Complex Number
1.6.1 Modulus of a Complex Number
1.6.2 Conjugate Properties
1.7 Division of Complex Numbers
1.8 The Modulus, Argument and Polar Form of a Complex Number
1.8.1 Principal Value of an Argument
1.9 De Moivre's Theorem
2.0 Introduction
2.1 Important Concepts in Matrices
2.2 Special Matrices
2.2.1 Addition of Matrices
2.2.2 Subtraction of Matrices
2.3 Multiplying a Scalar to a Matrix
2.4 Multiplication of Matrices
2.5 Determinants
2.5.1 Determinants of Matrices of Higher Order
2.6 Invertible Matrices
2.7 Linear Dependence and Independence of Matrices
2.8 Rank of a Matrix
2.9 Systems of Linear Equations
2.9.1 Matrix Representation of a Linear System
2.9.2 Gaussian Elimination Method
2.9.3 Eigenvalues and Eigenvectors
3.0 Introduction
3.1 Vector Elements or Components in a Coordinate Frame
3.2 Basic Properties
3.3 Vector Addition and Subtraction
3.4 Multiplication of a Vector by a Scalar
3.5 Scalar, Dot, or Inner Product
3.5.1 Geometrical Interpretation of the Scalar Product
3.6 Projection of One Vector onto the Other
3.7 Vector or Cross Product
COMPLEX NUMBERS
Objectives
At the end of this chapter you will be able to:
Understand that i stands for √(−1) and be able to reduce powers of i to ±i or ±1.
Understand that all complex numbers are of the form (real part) + i(imaginary part).
1.0 Introduction
Usually in mathematics the first number system that is encountered is the set of natural
numbers, i.e. 1, 2, 3, … . This system is successively enlarged to the integers, the rationals
and the reals; the complex numbers extend the reals one step further. A complex number z
is written z = x + iy, where x and y are real numbers and the symbol i is called the
imaginary unit: i² = −1. The numbers x and y are called, respectively, the real part and
imaginary part of z and denoted by
x = Re z and y = Im z.
The introduction of this new number, i, overcomes the problem of finding the square roots of
a negative number.
Example: Find √(−25).
Answer: √(−25) = √(25 × (−1)) = √25 × √(−1) = 5i
With the existence of the square roots of a negative number, it is possible to find the solutions
of any quadratic equation of the form ax² + bx + c = 0 using the quadratic formula. If the
discriminant b² − 4ac < 0, the equation has no real roots.
The quadratic formula
x = (−b ± √(b² − 4ac)) / (2a)
applied to x² − 6x + 25 = 0 gives
x = (6 ± √(36 − 100)) / 2 = (6 ± √(−64)) / 2 = (6 ± 8i) / 2,
i.e. x = 3 + 4i and x = 3 − 4i.
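The quadratic computation above can be checked numerically. A minimal Python sketch (the function name quadratic_roots is mine; cmath is the standard-library complex-math module):

```python
import cmath

def quadratic_roots(a, b, c):
    # cmath.sqrt returns a complex square root, so a negative
    # discriminant needs no special casing
    disc = cmath.sqrt(b * b - 4 * a * c)
    return (-b + disc) / (2 * a), (-b - disc) / (2 * a)

# x^2 - 6x + 25 = 0: discriminant 36 - 100 = -64, roots 3 +/- 4i
r1, r2 = quadratic_roots(1, -6, 25)
```

Because cmath.sqrt accepts a negative real argument, the same formula covers both the real-root and complex-root cases.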
Self Check: Solve the following quadratic equations using the quadratic formula.
a. x² + 2x + 3 = 0
b. x² + 2x + 6 = 0
c. x² + x + 1 = 0
Formally, a complex number z is an ordered pair (x, y) of real numbers together with the
imaginary unit i = √(−1), such that z = (x, y) = x + iy. Two complex numbers
z1 = x1 + iy1 and z2 = x2 + iy2 are equal if and only if x1 = x2 and y1 = y2.
Examples
1. Given that z1 = 4 + 76i and z2 = 4 + 76i, then z1 = z2 since x1 = x2 = 4 and y1 = y2 = 76.
If z = a + bi, then a is called the real part of z and b the imaginary part. These are
sometimes denoted Re z for a and Im z for b; that is, Re(a + bi) = a and Im(a + bi) = b.
Note also that a + bi = 0 if and only if a = b = 0.
The addition of two complex numbers, z1 and z2 , in general gives another complex number.
The real components and the imaginary components are added separately and in a like
manner to the familiar addition of real numbers:
Example
1. Add 3 + 4i and 2 - i
Add the real parts together to give the real part 5 ( i.e. 3 + 2 = 5)
Add the imaginary parts together to give the imaginary part 3 ( i.e. 4 + (-1) = 3)
So (3 + 4i) + (2 − i) = (3 + 2) + (4 − 1)i = 5 + 3i
Example
1. Given that z1 = 7 + i and z2 = 3 − i, evaluate z1 − z2.
Solution:
z1 − z2 = (7 + i) − (3 − i) = (7 − 3) + (1 − (−1))i = 4 + 2i
2. Given that z1 = 16 + 4i and z2 = 12 + 4i, evaluate z1 − z2.
Solution:
z1 − z2 = (16 + 4i) − (12 + 4i) = (16 − 12) + (4 − 4)i = 4
Example: Multiply (3 + 2i) and (4 + 5i).
(3 + 2i)(4 + 5i) = 3(4) + 3(5i) + 2i(4) + 2i(5i)
= 12 + 15i + 8i + 10i²
= 12 + 23i − 10    (remember that 10i² = 10(−1) = −10)
= 2 + 23i
Self check
(i) Multiply (4 - 2i) and (2 + 3i)
(ii) Multiply (1 + 2i) and (2 - 3i)
Earlier examples showed that the square roots of a negative real number could be found in
terms of i in the set of complex numbers. It is also possible to find the square roots of any
complex number.
Example: Find the square roots of 5 + 12i.
Answer: Let the square root of 5 + 12i be the complex number a + bi, so (a + bi)² = 5 + 12i.
Note: the trick is to equate the real parts and the imaginary parts to give two equations, which
can be solved simultaneously.
Expanding gives a² − b² + 2abi = 5 + 12i.
Equate the real parts to give a² − b² = 5 (call this equation 1).
Equate the imaginary parts to give 2ab = 12 (call this equation 2).
From equation 2, b = 6/a; substituting into equation 1 gives a² − 36/a² = 5, i.e. a⁴ − 5a² − 36 = 0.
Hence (a² − 9)(a² + 4) = 0, so a² = −4 or 9. Since a is real, a = ±3 and b = ±2, and the
square roots of 5 + 12i are ±(3 + 2i).
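The equate-the-parts method can be verified against the library's principal square root; a small sketch using the values a = 3, b = 2 from the worked solution:

```python
import cmath

# (a + bi)^2 = 5 + 12i forces a^2 - b^2 = 5 and 2ab = 12;
# a = 3, b = 2 satisfies both, so sqrt(5 + 12i) = +/-(3 + 2i)
a, b = 3, 2
root = complex(a, b)
square = root * root                # should reproduce 5 + 12i
principal = cmath.sqrt(5 + 12j)     # library principal square root
```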
1.5.1 Multiplication by i
In each of the examples it can be seen that the effect of multiplying a complex number by i is
to rotate the point representing it on an Argand diagram through 90° anticlockwise about the origin.
There are special identities which apply to complex numbers and their conjugates. These are
listed below.
Let the two complex numbers be z = a + bi and w, with conjugates z̄ and w̄ respectively.
1. The conjugate of z̄ is z itself.
2. z̄ = z when z ∈ ℝ.
3. z + z̄ = 2 Re z.
4. |z̄| = |z|.
5. z z̄ = |z|².
6. The conjugate of z + w is z̄ + w̄.
7. The conjugate of zw is z̄ w̄.
8. |zw| = |z| |w|.
Proof (4): If z = a + bi then z̄ = a − bi, so |z̄| = √(a² + (−b)²) = √(a² + b²) = |z|.
Proof (5): z z̄ = (a + bi)(a − bi) = a² + b². But |z| = √(a² + b²), so |z|² = a² + b².
Thus z z̄ = |z|².
1. Prove properties 6 to 8.
2. Prove the triangle inequality for n complex numbers:
|z1 + z2 + ⋯ + zn| ≤ |z1| + |z2| + ⋯ + |zn|.
To divide complex numbers, multiply the numerator and denominator by the conjugate of the
denominator:
(a + bi)/(c + di) = (a + bi)(c − di) / ((c + di)(c − di)) = ((ac + bd) + (bc − ad)i) / (c² + d²)
= (ac + bd)/(c² + d²) + ((bc − ad)/(c² + d²)) i
Example 1
(3 + 7i)/(2 − 6i) = (3 + 7i)(2 + 6i) / ((2 − 6i)(2 + 6i)) = (6 + 18i + 14i + 42i²) / (2² + 6²)
= (−36 + 32i)/40
= −(9/10) + (4/5)i
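Division by the conjugate can be mirrored in code; the helper divide below is my own name for the textbook procedure, applied to the worked example:

```python
def divide(z, w):
    # multiply top and bottom by the conjugate of the denominator;
    # the denominator then becomes the real number c^2 + d^2
    num = z * w.conjugate()
    den = (w * w.conjugate()).real
    return complex(num.real / den, num.imag / den)

q = divide(3 + 7j, 2 - 6j)    # -> -9/10 + (4/5)i
```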
Self check
i. Divide 5 − 2i by 4 + 3i.
ii. If z1 = 2 − 3i and z2 = 3 − 4i, find z1/z2.
iii. Find Re((3 + 2i)/(1 + 4i)) and Im((3 + 2i)/(1 + 4i)).
Remember the Argand diagram in which the point (a, b) corresponds to the complex number
z a bi
When the complex number is written as a + bi where a and b are real numbers, this is known
as the Cartesian form.
This point (a, b) can also be specified by giving the distance, r, of the point from the origin
and the angle, θ, between the line joining the point to the origin and the positive x-axis.
Thus the complex number z can be written as r cos θ + i r sin θ. This is known as the polar form
of a complex number; r is called the modulus of z and θ is the argument of z.
Argument of a complex number: The argument of a complex number is the angle between
the positive x-axis and the line representing the complex number on an Argand diagram. It is
denoted arg (z).
Polar form of a complex number: The polar form of a complex number is z = r(cos θ + i sin θ),
where r is the modulus and θ is the argument.
There will be times when conversion between these forms is necessary. Given a modulus (r)
and argument (θ) of a complex number, it is easy to find the number in Cartesian form.
Example: If a complex number z has modulus 2 and argument π/6, express z in the
form a + bi and plot the point which represents the number on an Argand diagram.
Answer:
We have a = 2 cos(π/6) = 2(√3/2) = √3 and b = 2 sin(π/6) = 2(1/2) = 1,
so a + bi = √3 + i.
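Conversions between polar and Cartesian form are available in the standard library; a sketch of the example above (modulus 2, argument π/6):

```python
import cmath
import math

# modulus 2, argument pi/6 -> Cartesian form, as in the example
z = cmath.rect(2, math.pi / 6)    # r(cos t + i sin t) = sqrt(3) + i
r, theta = cmath.polar(z)         # and back to (modulus, argument)
```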
Self check
Plot the complex number with modulus 2 and argument 11π/6 on an Argand diagram.
The value of the principal argument for a complex number z = a + bi in each quadrant can be
read from an Argand diagram. In each case the acute angle α between the line representing z
and the real axis satisfies tan α = |b/a|, i.e. α = tan⁻¹|b/a|. When the complex number is
represented by a point in the first or second quadrant, the principal argument is positive
(lying between 0 and π); when it is represented in the third or fourth quadrant, the
principal argument is negative (lying between 0 and −π).
NB: Taking tan⁻¹(b/a) without the modulus sign on a calculator may give a different value than
required.
Example 1
Take two complex numbers z = r1(cos θ + i sin θ) and w = r2(cos φ + i sin φ) and multiply them
together. Thus
zw = r1r2 (cos θ cos φ − sin θ sin φ) + i r1r2 (sin θ cos φ + cos θ sin φ) = r1r2 (cos(θ + φ) + i sin(θ + φ)).
So the modulus of the product, r1r2, is the product of the moduli of z and w, namely r1 and r2.
The argument of the product, θ + φ, is the sum of the arguments of z and w, namely θ and φ.
For example,
[3(cos 30° + i sin 30°)][4(cos 60° + i sin 60°)] = 12(cos(30° + 60°) + i sin(30° + 60°)) = 12(cos 90° + i sin 90°) = 12i.
For division,
z/w = r1(cos θ + i sin θ) / (r2(cos φ + i sin φ))
= (r1/r2) · (cos θ + i sin θ)(cos φ − i sin φ) / ((cos φ + i sin φ)(cos φ − i sin φ))
= (r1/r2) · ((cos θ cos φ + sin θ sin φ) + i(sin θ cos φ − cos θ sin φ)) / (cos²φ + sin²φ)
= (r1/r2) (cos(θ − φ) + i sin(θ − φ)).
So to divide two complex numbers in polar form, divide the moduli and subtract the arguments.
If z = r(cos θ + i sin θ), then
z² = r²(cos 2θ + i sin 2θ)
and
z³ = z z² = r³(cos 3θ + i sin 3θ).
In general, we obtain the following result, which is named after the French mathematician
Abraham De Moivre.
Theorem: For all rational values of n, if z = r(cos θ + i sin θ), then zⁿ = rⁿ(cos nθ + i sin nθ).
This says that to take the nth power of a complex number we take the nth power of the
modulus and multiply the argument by n.
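De Moivre's Theorem can be spot-checked for a sample modulus, argument and power (the values r = 2, t = 0.7, n = 5 are arbitrary choices of mine):

```python
import cmath

# compare z^n against r^n (cos nt + i sin nt) for a sample z
r, t, n = 2.0, 0.7, 5
lhs = cmath.rect(r, t) ** n       # direct nth power of z
rhs = cmath.rect(r ** n, n * t)   # De Moivre's closed form
```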
Proof (by induction, for n ∈ ℕ): the result is clearly true for n = 1. Suppose it is true for
n = k, so that [r(cos θ + i sin θ)]ᵏ = rᵏ(cos kθ + i sin kθ). Then
[r(cos θ + i sin θ)]^(k+1) = [r(cos θ + i sin θ)]ᵏ · r(cos θ + i sin θ)
= rᵏ(cos kθ + i sin kθ) · r(cos θ + i sin θ)
= r^(k+1) (cos(k + 1)θ + i sin(k + 1)θ),
using the rule of multiplying the moduli and adding the arguments.
So the result is true for n = k + 1 if it is true for n = k. Since it is also true for n = 1, it is
true for all n ∈ ℕ.
Example
Calculate [2(cos(π/5) + i sin(π/5))]⁵.
[2(cos(π/5) + i sin(π/5))]⁵ = 2⁵(cos(5 · π/5) + i sin(5 · π/5)) = 32(cos π + i sin π) = −32
Example
Calculate (cos(π/4) + i sin(π/4))^(1/3).
(cos(π/4) + i sin(π/4))^(1/3) = cos((1/3) · π/4) + i sin((1/3) · π/4) = cos(π/12) + i sin(π/12)
Self check
Calculate
1. [27(cos(π/2) + i sin(π/2))]^(1/3)
2. [8(cos(π/4) + i sin(π/4))]^(2/3)
Activity One (Basic Operations on Complex Numbers)
Simplify each expression and write the result in standard form.
1. √(−9) = √9 · √(−1) = 3i
2. (3i)(2i) = 6i² = −6
3. (√(−75))² = (5√3 i)² = 25(3)i² = −75
Perform the addition or subtraction and write the result in standard form.
4. (4 + i) + (7 − 2i) = 11 − i
5. (11 + 2i) − (3 + 6i) = 8 − 4i
6. 7 + √(−18) − (3 + 3√2 i) = 7 + 3√2 i − 3 − 3√2 i = 4
7. 13i − (14 − 7i) = −14 + 20i
8. 22 + (−5 + 8i) + 10i = 17 + 18i
9. (3/4 + (7/5)i) + (5/6 − (1/6)i) = (3/4 + 5/6) + (7/5 − 1/6)i = 19/12 + (37/30)i
10. √(−6) · √(−2) = (√6 i)(√2 i) = √12 i² = −2√3
12. (6 + 2i)(2 + 3i) = 12 + 18i + 4i + 6i² = 6 + 22i
13. (√14 + √10 i)(√14 − √10 i) = 14 − 10i² = 14 + 10 = 24
14. (4 + 3i)(4 − 3i) = 16 − 9i² = 16 + 9 = 25
15. (3 + √2 i)(3 − √2 i) = 9 − 2i² = 9 + 2 = 11
16. 6/i = (6/i)(i/i) = 6i/i² = −6i
17. 4/(4 − 5i) = 4(4 + 5i) / ((4 − 5i)(4 + 5i)) = (16 + 20i)/(16 − 25i²) = (16 + 20i)/41 = 16/41 + (20/41)i
18. (8 − 7i)/(1 − 2i) = (8 − 7i)(1 + 2i) / ((1 − 2i)(1 + 2i)) = (8 + 16i − 7i − 14i²)/5 = (22 + 9i)/5
19. 1/(4 − 5i)² = 1/(16 − 40i + 25i²) = 1/(−9 − 40i) = (−9 + 40i)/((−9)² + 40²) = (−9 + 40i)/1681
22. 4i² − 2i³ = 4(−1) − 2(−i) = −4 + 2i
23. 5i⁵ = 5(i⁴)(i) = 5(1)i = 5i
24. (√(−2))⁶ = (√2 i)⁶ = (√2)⁶ i⁶ = 8(−1) = −8
Find the absolute value (modulus) of the complex number.
5. |−2| = 2
6. |4 + 2i| = √(4² + 2²) = √20 = 2√5
8. |1 − i| = √(1² + 1²) = √2
9. |2 + 2i| = √(2² + 2²) = √8 = 2√2
10. |−2 + 2i| = √8 = 2√2
11. |√3 − √3 i| = √(3 + 3) = √6
12. |√5 − √5 i| = √(5 + 5) = √10
14. |√2 − 2√2 i| = √(2 + 8) = √10
15. |5 − 5√5 i| = √(25 + 125) = √150 = 5√6
Perform the addition and plot the result on an Argand diagram.
17. (3 + 2i) + (−2 + 4i) = 1 + 6i
18. (6 + i) + (3 + 2i) = 9 + 3i
Represent the complex number graphically, and find the trigonometric form of the number.
2. 3 − i
3. 2(1 + √3 i)
5. 7 + 4i
Write the complex number in standard form.
7. 2(cos 120° + i sin 120°)
9. 3.75(cos(3π/4) + i sin(3π/4))
Perform the indicated operation and leave the result in trigonometric form.
12. [3(cos(π/3) + i sin(π/3))] [4(cos(π/6) + i sin(π/6))]
13. [(3/2)(cos(π/6) + i sin(π/6))] [6(cos(π/4) + i sin(π/4))]
14. [(5/3)(cos 140° + i sin 140°)] [(2/3)(cos 60° + i sin 60°)]
15. (cos 5° + i sin 5°)(cos 20° + i sin 20°)
16. (cos 50° + i sin 50°) / (cos 20° + i sin 20°)
17. [2(cos 120° + i sin 120°)] / [4(cos 40° + i sin 40°)]
18. [cos(7π/4) + i sin(7π/4)] / [cos π + i sin π]
19. [18(cos 54° + i sin 54°)] / [3(cos 102° + i sin 102°)]
20. [9(cos 20° + i sin 20°)] / [5(cos 75° + i sin 75°)]
Use De Moivre's Theorem to find the indicated power of the complex number.
1. (1 + i)³
2. (1 − i)¹⁰
3. (2 + 3i)⁵
4. (3 + 2i)⁵
6. [cos(5π/4) + i sin(5π/4)]¹⁰
Use the Complex Roots Theorem to find the indicated roots of the complex number, represent
each of the roots graphically, and express the roots in standard form.
9. Square roots of 16(cos(4π/3) + i sin(4π/3))
11. Fifth roots of i
12. Fifth roots of 1
Find all solutions of each equation.
1. x⁴ − i = 0
2. x⁵ − 243 = 0
3. x³ + 64i = 0
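The root exercises above all follow one recipe: take the nth root of the modulus and divide the argument (plus multiples of 2π) by n. A minimal sketch, with nth_roots being my own helper name:

```python
import cmath
import math

def nth_roots(z, n):
    # the n distinct nth roots share modulus |z|**(1/n); their
    # arguments are (arg z + 2*pi*k)/n for k = 0, ..., n-1
    r, theta = cmath.polar(z)
    return [cmath.rect(r ** (1.0 / n), (theta + 2 * math.pi * k) / n)
            for k in range(n)]

fifth_roots_of_unity = nth_roots(1, 5)   # exercise 12 above
```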
1. Compute the real and imaginary parts of z = (i − 4)/(2i − 3).
2. Compute the absolute value and the conjugate of z = (1 + i)⁶ and w = i¹⁷.
4. Write in the "trigonometric" form ρ(cos θ + i sin θ) the following complex numbers:
a) 8   b) 6i   c) cos(π/3) − i sin(π/3)
5. Simplify
a) (1 + i)/(1 − 2i) + 3i/(2 + 2i)
b) 2i(i − 1) + (√3 + i)(1 + i)(1 − i)
6. Solve for z ∈ ℂ:
a) z + iz̄ = 1   b) z² z̄ = z   c) |z + 3i| = 3|z|
Question 2
Given that x, y ∈ ℝ, find the value of x and the value of y in the equation
(x + iy)(2 + i) = 3 − i.
Answer: (x, y) = (1, −1)
Question 4
z = 4i/(1 − i). Find the modulus of z.
Answer: 2√2
Question 5
Given that x(1 + i)² + y(2 − i)² = 3 + 10i, find x, y ∈ ℝ.
Answer:
x = 7, y = 1
Question 6
Answer: (x, y) = (7/25, 24/25)
Question 7
z = 3 + 4i and zw = 14 + 2i.
By showing clear workings, find:
a) w in the form a + bi, where a and b are real numbers.
b) The modulus and the argument of w.
Question 8
z = 22 + 4i and z/w = 6 − 8i. Find w.
Answer
w = 1 + 2i, |w| = √5, arg w = 1.11 rad
Question 9
7 4i
z 2 i 8
2
2i
Answer= 3 7i
Question 10
z = 2 + 5i
Question 11
Solve z² = 21 − 20i, z ∈ ℂ.
ANSWER
z = ±(5 − 2i)
Question 12
2z³ − 5z² + cz − 5 = 0, where c is a constant, has a solution of z = 1 + 2i.
z = 8 + i(7 − 2z), z ∈ ℂ.
Determine the value of z in the above equation, giving your answer in the form
x + iy, where x and y are real numbers.
MATRIX ALGEBRA
Objectives
At the end of this chapter you will be able to:
State the conditions for the equality of two matrices.
Identify the various types of matrices.
Add, subtract and multiply matrices.
Multiply a matrix by a scalar.
Find the determinant of a matrix.
Solve systems of linear equations.
Compute the eigenvalues and eigenvectors of a matrix.
2.0 Introduction
This chapter investigates matrices and algebraic operations defined on them. These matrices
may be viewed as rectangular arrays of elements where each entry depends on two subscripts
(as compared with vectors, where each entry depended on only one subscript). Systems of
linear equations and their solutions may be efficiently investigated using the language of
matrices.
DEFINITION
(Matrix) A rectangular array of numbers is called a matrix. We shall mostly be concerned
with matrices having real numbers as entries. The horizontal arrays of a matrix are called its
ROWS and the vertical arrays are called its COLUMNS. A matrix having m rows and n
columns is said to have the order m × n, and is written A = (aij),
where aij is the entry at the intersection of the i-th row and j-th column.
In a more concise manner, we also denote the matrix by (aij), suppressing its order.
Let A = | 1 3 7 |
        | 4 5 6 |
then a11 = 1, a12 = 3, a13 = 7, a21 = 4, a22 = 5 and a23 = 6.
A matrix having only one column is called a COLUMN VECTOR; and a matrix with only one
row is called a ROW VECTOR.
Example 1
Give the size of the matrices below
Solution
| 4 3  0  6   1 0 |
| 0 2  4  7   1 3 |
| 6 1 15  1/2 1 0 |
Size: 3 × 6
| 7 10 1 |
| 8  0 2 |
| 9  3 0 |
Size: 3 × 3
In this matrix the number of rows is equal to the number of columns. Matrices that have the
same number of rows as columns are called square matrices.
( 3 1 12 0 9 )
This matrix has a single row and is often called a row matrix.
( −2 )
Often when dealing with a 1 × 1 matrix we drop the surrounding brackets and just write −2.
2.1 IMPORTANT CONCEPTS IN MATRICES
Principal Diagonal
The diagonal entries a11, a22, …, ann of a square matrix of order n form its principal diagonal.
Example
A = |  3 88  5 |
    | 20 15 69 |
    |  6 69 23 |
The principal diagonal of A is given by a11 = 3, a22 = 15, a33 = 23.
6 69 23
Equality of matrices: two matrices are said to be equal if they have the same order and their
corresponding entries are equal.
(i) A = | 10 8  5 |   B = | 10 8  5 |
        |  2 5  6 |       |  2 5  6 |
        |  6 2 23 |       |  6 2 23 |
A = B since both have the same order 3 × 3 and their individual entries are equal, i.e.
a11 = b11 = 10, a12 = b12 = 8, a13 = b13 = 5, a21 = b21 = 2, a22 = b22 = 5, a23 = b23 = 6,
a31 = b31 = 6, a32 = b32 = 2, a33 = b33 = 23.
(ii) A = | 10  8  5 |   B = | 10 8  5 |
         |  2 15  6 |       |  2 5  6 |
         |  6  2 23 |       |  6 2 23 |
In this case A ≠ B even though both matrices have the same order, because their entries are
not all the same: a22 ≠ b22, since a22 = 15 but b22 = 5.
Sub – Matrix
Any matrix obtained by omitting some row and columns from a given matrix A is called a
sub-matrix of A.
For example, a 5 × 3 matrix
    | a11 a12 a13 |
    | a21 a22 a23 |
A = | a31 a32 a33 | = | A11 A12 |
    | a41 a42 a43 |   | A21 A22 |
    | a51 a52 a53 |
may be partitioned into blocks A11, A12, A21, A22, each of which is a sub-matrix of A.
Example:
Given that A = | 10  8  5 |
               |  2 15  6 |
               |  6  2 23 |
then C = | 15  6 |
         |  2 23 |
(obtained by deleting the first row and first column) is a sub-matrix of A.
A matrix in which each entry is zero is called a zero matrix, denoted by 0. For example,
0 (2 × 2) = | 0 0 |   and   0 (2 × 3) = | 0 0 0 |
            | 0 0 |                     | 0 0 0 |
Nilpotent Matrix
A nilpotent matrix is a square matrix N such that N k 0 , for some positive integer k. The
smallest k is sometimes called the degree of N.
Example:
The matrix M = | 0 1 |
               | 0 0 |
is nilpotent, since M² = 0. Thus the degree of the matrix M is 2.
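The nilpotency claim is easy to confirm with a quick computation (this sketch assumes the numpy package):

```python
import numpy as np

M = np.array([[0, 1],
              [0, 0]])
M2 = M @ M    # the zero matrix, so M is nilpotent of degree 2
```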
Square Matrices
A square matrix is a matrix which has the same number of rows and columns. An n by n
matrix is known as a square matrix of order n.
A = | 10  8  5 |
    |  2 15  6 |
    |  6  2 23 |
is an example of a square matrix of order 3.
Any two square matrices of the same order can be added and multiplied. A square matrix A
is called invertible or non-singular if there exists a matrix B such that
AB = BA = In,
where In is the n × n identity matrix.
Example: A = |  1 1 |  has inverse  B = | 2/3 −1/3 |
             | −1 2 |                   | 1/3  1/3 |
since
AB = |  1 1 | | 2/3 −1/3 | = | 1 0 | = I2.
     | −1 2 | | 1/3  1/3 |   | 0 1 |
Identity Matrix
The identity matrix or unit matrix of size n is the n by n square matrix with ones on the
main diagonal and zeros elsewhere. It is denoted by In, or simply by I if the size is immaterial:
I1 = (1),  I2 = | 1 0 |,  I3 = | 1 0 0 |,  …
                | 0 1 |        | 0 1 0 |
                               | 0 0 1 |
The important property of the identity matrix under multiplication is that for any m by n
matrix A,
Im A = A In = A.
Example:
If A = | 1 4 7 |
       | 2 6 1 |
       | 3 2 4 |
show that Im A = A In = A.
Solution
Since A is a square matrix of order 3 × 3, the expression becomes
I3 A = A I3 = A:
| 1 0 0 | | 1 4 7 |   | 1 4 7 | | 1 0 0 |   | 1 4 7 |
| 0 1 0 | | 2 6 1 | = | 2 6 1 | | 0 1 0 | = | 2 6 1 |
| 0 0 1 | | 3 2 4 |   | 3 2 4 | | 0 0 1 |   | 3 2 4 |
Transpose of a Matrix
The transposed matrix Aᵀ (or A′) of a matrix A is defined to be the matrix which has rows
identical with the columns of A. Thus if A = (aij) then Aᵀ = (aji). In short, Aᵀ is obtained by
interchanging the rows and columns of A.
Example 1:
Given that A = | 1 4 7 |   then   Aᵀ = | 1 2 3 |
               | 2 6 1 |              | 4 6 2 |
               | 3 2 4 |              | 7 1 4 |
Properties of the transpose:
(i) (A + B)ᵀ = Aᵀ + Bᵀ
(ii) (Aᵀ)ᵀ = A
(iii) (kA)ᵀ = kAᵀ, where k is a scalar.
(iv) (AB)ᵀ = BᵀAᵀ
Observe in (iv) that the transpose of a product is the product of transposes, but in the reverse
order.
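Property (iv), the reverse-order rule, can be illustrated on small sample matrices of my choosing (assuming numpy):

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[5, 6],
              [7, 8]])
lhs = (A @ B).T    # transpose of the product
rhs = B.T @ A.T    # product of transposes, in reverse order
```

Note that Aᵀ Bᵀ (the same order, not reversed) gives a different matrix here.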
Conjugate Transpose
The conjugate transpose A* of a complex matrix A is obtained from A by taking the transpose
and then taking the complex conjugate of each entry (i.e. negating their imaginary parts but
not their real parts). The conjugate transpose is formally defined by
(A*)ij = conj(Aji),
that is, A* = conj(Aᵀ) = (conj A)ᵀ, where Aᵀ denotes the transpose and conj A denotes the
matrix with complex-conjugated entries.
If A = | 3 + i    5 |   then   A* = | 3 − i  2 + 2i |
       | 2 − 2i   i |              |   5      −i   |
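The same conjugate transpose can be reproduced with numpy, whose .conj().T chain matches the "transpose, then conjugate" definition:

```python
import numpy as np

A = np.array([[3 + 1j, 5],
              [2 - 2j, 1j]])
A_star = A.conj().T    # conjugate transpose of A
```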
Normal Matrix
A complex square matrix A is normal if
A* A = A A*,
where A* is the conjugate transpose of A. That is, a matrix is normal if it commutes with its
conjugate transpose.
Symmetric Matrix
A square matrix A is symmetric if
A = Aᵀ.
The entries of a symmetric matrix are symmetric with respect to the main diagonal (top left to
bottom right). So if the entries are written as A = (aij), then aij = aji for all indices i and j.
The following 3 × 3 matrix is symmetric:
| 1 2 3 |
| 2 4 5 |
| 3 5 6 |
Skew-Symmetric Matrix
A square matrix A is skew-symmetric if Aᵀ = −A, or in component form, if A = (aij), then
aij = −aji for all i and j. For example, the following matrix is skew-symmetric:
|  0 2 −1 |
| −2 0 −4 |
|  1 4  0 |
We have seen that a matrix is a block of entries or two dimensional data. The size of the
matrix is given by the number of rows and the number of columns. If the two numbers are
the same, we called such matrix a square matrix.
To square matrices we associate what we call the main diagonal (in short the diagonal).
Indeed, consider the matrix
A = | a b |
    | c d |
Its diagonal is given by the numbers a and d. For the matrix
A = | a b c |
    | d e f |
    | g h k |
its diagonal consists of a, e, and k. In general, if A is a square matrix of order n and if aij is
the number in the i th - row and j th - column, then the diagonal is given by the numbers aii ,
for i 1, ,n .
The diagonal of a square matrix helps define two types of matrices: upper-triangular and
lower-triangular. Indeed, the diagonal subdivides the matrix into two blocks: one above the
diagonal and the other one below it. If the lower block consists of zeros, we call such a matrix
upper-triangular; if the upper block consists of zeros, we call it lower-triangular. For example,
the matrices
| a b |   and   | a b c |
| 0 d |         | 0 e f |
                | 0 0 k |
are upper-triangular, while the matrices
| a 0 |   and   | a 0 0 |
| c d |         | d e 0 |
                | g h k |
are lower-triangular. Now consider the two matrices
A = | a 0 0 |   and   B = | a d g |
    | d e 0 |             | 0 e h |
    | g h k |             | 0 0 k |
The matrices A and B are triangular.
A diagonal matrix is a symmetric matrix with all of its entries equal to zero except maybe the
ones on the diagonal. So a diagonal matrix has at most n nonzero entries. For example, the
matrices
| a 0 |   and   | a 0 0 |
| 0 b |         | 0 0 0 |
                | 0 0 b |
are diagonal matrices. Identity matrices are examples of diagonal matrices.
Example: Let
A = | a 0 |
    | 0 b |
Define the power matrices of A by
A⁰ = I2, A¹ = A, A² = AA, A³ = AAA, etc.
Answer. We have
A² = | a 0 | | a 0 | = | a² 0  |
     | 0 b | | 0 b |   | 0  b² |
and
A³ = A²A = | a² 0  | | a 0 | = | a³ 0  |
           | 0  b² | | 0 b |   | 0  b³ |
By induction, one may easily show that
Aⁿ = | aⁿ 0  |
     | 0  bⁿ |
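The closed form Aⁿ = diag(aⁿ, bⁿ) can be checked numerically (the sample values a = 2, b = 3, n = 4 are mine; assumes numpy):

```python
import numpy as np

a, b, n = 2, 3, 4
A = np.diag([a, b])                  # diagonal matrix diag(a, b)
An = np.linalg.matrix_power(A, n)    # equals diag(a**n, b**n)
```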
2.2.1 Addition of Matrices: In order to add two matrices, we add the entries one by one.
Note: Matrices involved in the addition operation must have the same size.
So how do we add matrices? The answer is to add entries one by one. For example,
| a b c | + | a′ b′ c′ | = | a + a′  b + b′  c + c′ |
| d e f |   | d′ e′ f′ |   | d + d′  e + e′  f + f′ |
Clearly, if you want to double a matrix, it is enough to add the matrix to itself:
| a b c | + | a b c | = | a + a  b + b  c + c | = | 2a 2b 2c |
| d e f |   | d e f |   | d + d  e + e  f + f |   | 2d 2e 2f |
Example
Given that A = | 1 4 7 |  and  B = | 6 9 6 |, evaluate A + B.
               | 2 6 1 |           | 5 6 4 |
               | 3 2 4 |           | 2 2 3 |
A + B = | 1 4 7 | + | 6 9 6 | = | 7 13 13 |
        | 2 6 1 |   | 5 6 4 |   | 7 12  5 |
        | 3 2 4 |   | 2 2 3 |   | 5  4  7 |
1. A + B = B + A (commutativity).
2. (A + B) + C = A + (B + C) (associativity).
3. k(A + B) = kA + kB.
4. (k + l)A = kA + lA.
5. A + (−A) = (−A) + A = 0.
6. A + 0 = A.
2.2.2 Subtraction of matrices: If M and N are two matrices, then we write
M − N = M + (−1)N; we subtract the entries one by one.
Note: Matrices involved in the subtraction operation must have the same size.
So how do we subtract matrices? The answer is to subtract entries one by one. For example,
| a b c | − | a′ b′ c′ | = | a − a′  b − b′  c − c′ |
| d e f |   | d′ e′ f′ |   | d − d′  e − e′  f − f′ |
Clearly, if you want to get a zero matrix, it is enough to subtract the matrix from itself:
| a b c | − | a b c | = | a − a  b − b  c − c | = | 0 0 0 |
| d e f |   | d e f |   | d − d  e − e  f − f |   | 0 0 0 |
Let X = | a |   and   Y = | a′ |
        | b |             | b′ |
        | c |             | c′ |
be two column vectors. The scalar product of X and Y is defined by
X · Y = Xᵀ Y = aa′ + bb′ + cc′.
Example 1
For the following matrices, perform the indicated operation if possible.
A = |  2 0  3  2 |   B = |  0 4 7 2 |   C = | 2 0 2 |
    | −1 8 10 −5 |       | 12 3 7 9 |       | 5 4 9 |
                                            | 6 0 6 |
a. A + B
b. B − A
c. A + C
Solution
a. Both A and B are of the same size and so we know the addition can be done in this case. Once we
know the addition can be done there isn't much to do other than just add:
A + B = |  2  4 10 4 |
        | 11 11 17 4 |
b. Again, since A and B are of the same size we can form the difference. All we need to be careful
with is the order: just as with real-number arithmetic, B − A is different from A − B. In this case we
subtract the entries of A from the entries of B:
B − A = | −2  4  4  0 |
        | 13 −5 −3 14 |
c. In this case, because A and C are different sizes, the addition cannot be done. Likewise A − C,
C − A, B + C, C − B and B − C cannot be done, for the same reason.
2.3 Multiplying a Scalar to a Matrix.
Let A = (aij) be an m × n matrix. Then for any scalar k ∈ ℝ we define kA = (k aij).
In order to multiply a matrix by a number, you multiply every entry by the given number.
Keep in mind that we always write numbers to the left and matrices to the right (in the case
of multiplication).
2 | a b c | = | 2a 2b 2c |
  | d e f |   | 2d 2e 2f |
Example 1 Given that
A = | 0 9 |   B = | −8 −1 |   C = |   2  3 |
    | 2 3 |       |  7  0 |       |  −2 −5 |
    | 1 1 |       |  4  1 |       | −10 −6 |
compute 3A − 2B − ½C.
SOLUTION
We are really being asked to compute a linear combination here. We will do that by first computing
the scalar multiples and then performing the addition and subtraction. Note as well that in the case of
the third scalar multiple we are going to consider the scalar to be a positive ½ and leave the minus
sign out in front of the matrix. Here is the work for the problem:
3A − 2B − ½C = | 0 27 | − | −16 −2 | − |  1  3/2 | = | 15 55/2 |
               | 6  9 |   |  14  0 |   | −1 −5/2 |   | −7 23/2 |
               | 3  3 |   |   8  2 |   | −5 −3   |   |  0  4   |
For example, if A = | 1 2 3 |  and  B = | 1 2 1 |  then
                    | 2 4 1 |           | 0 0 3 |
                                        | 1 0 4 |
AB = | 1 + 0 + 3  2 + 0 + 0  1 + 6 + 12 | = | 4 2 19 |
     | 2 + 0 + 1  4 + 0 + 0  2 + 12 + 4 |   | 3 4 18 |
In fact, the general rule says that in order to perform the multiplication AB, where A is an m × n
matrix and B is a k × l matrix, we must have n = k. The result will be an m × l matrix.
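The size rule can be demonstrated with a quick product (sample matrices of my choosing, assuming numpy):

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])    # 2 x 3
B = np.array([[1, 0],
              [0, 1],
              [1, 1]])       # 3 x 2: inner sizes 3 and 3 match
C = A @ B                    # result is 2 x 2
```

Swapping the factors to B @ A would also be defined here (2 and 2 match) but would give a 3 × 3 matrix instead.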
Remember that although a 2 × 3 matrix can be multiplied by a 3 × 3 matrix as above, it is not
possible to perform the multiplication
| a b c | | a b c |
| d e f | | d e f |
since the number of columns of the first matrix (3) is not equal to the number of rows of the
second (2).
Matrix multiplication is not commutative: even when both products are defined, in general
AB ≠ BA. For example, with
A = | 1 1 |   and   B = | 1 0 |
    | 0 0 |             | 1 0 |
we get
AB = | 2 0 |   while   BA = | 1 1 |
     | 0 0 |                | 1 1 |
1. Let A, B and C be three matrices. If you can perform the appropriate products, then
we have
(A + B)C = AC + BC
and
A(B + C) = AB + AC.
For scalars α and β we also have
α(A + B) = αA + αB
and
(α + β)A = αA + βA.
0 1 2
A , B , and C 0 1 5
1 0 1
EXAMPLE 5
Compute AC and CA, if possible, for the following two matrices.
A = | 1 −3  0 4 |   C = |  8  5 3 |
    | 2  5 −8 9 |       |  3 10 2 |
                        | −2  0 4 |
                        |  1 −7 5 |
SOLUTION
Okay, let's first do AC. Here are the sizes: A is 2 × 4 and C is 4 × 3. The inner numbers
(4 and 4) agree, so AC is defined and will be a 2 × 3 matrix.
If we want the entry in the first row and second column of AC, we multiply the first row of A by the
second column of C as follows:
(1)(5) + (−3)(10) + (0)(0) + (4)(−7) = −53
Okay, at this point let's stop and insert this into the product so we can make sure that we've got our
bearings. As we can see, we've got five entries left to compute. For these we give the row and
column multiplications but leave it to you to make sure we used the correct row/column and put the
result in the correct place. Here is the remaining work:
(1)(8) + (−3)(3) + (0)(−2) + (4)(1) = 3
(1)(3) + (−3)(2) + (0)(4) + (4)(5) = 17
(2)(8) + (5)(3) + (−8)(−2) + (9)(1) = 56
(2)(5) + (5)(10) + (−8)(0) + (9)(−7) = −3
(2)(3) + (5)(2) + (−8)(4) + (9)(5) = 29
so
AC = |  3 −53 17 |
     | 56  −3 29 |
For CA the sizes are 4 × 3 and 2 × 4. Okay, in this case the two inner numbers (3 and 2) are not the
same, and so this product can't be done.
Compute BD for
B = |  3 −1  7 |   D = | −1 4  9 |
    | 10  1 −8 |       |  6 2 −1 |
    | −5  2  4 |       |  7 4  7 |
SOLUTION
First notice that both of these matrices are 3 × 3 matrices, and so both BD and DB are defined.
It's worth pointing out that this example differs from the previous example in that both products
are defined rather than only one of them. Also note that in both cases the product will be a new
3 × 3 matrix. In this example we are going to leave the work of verifying the product to you; it is
good practice, so you should try to verify at least one of the entries.
BD = |  3 −1  7 | | −1 4  9 |   |  40 38  77 |
     | 10  1 −8 | |  6 2 −1 | = | −60 10  33 |
     | −5  2  4 | |  7 4  7 |   |  45  0 −19 |
2.5 Determinants
For any square matrix of order 2, we have found a necessary and sufficient condition for
invertibility. Indeed, consider the matrix
a b
A
c d
We define the determinant of A, written det A or |A|, by
det | a b | = ad − bc.
    | c d |
The matrix A is invertible if and only if ad bc 0 . We called this number the determinant
of A. It is clear from this that we would like to have a similar result for bigger matrices (meaning
higher orders). So is there a similar notion of determinant for any square matrix, which
determines whether a square matrix is invertible or not?
Determinants of order 3 occur in systems of three linear equations with three unknowns
x1, x2 and x3. The determinant obtained by deleting one row and one column of D is called
the minor of the element that belongs to the deleted row and column.
Cofactors: The cofactor of the element in the i-th row and j-th column of D is its minor
multiplied by (−1)^(i+j), following the checkerboard pattern
| + − + |
| − + − |
| + − + |
We may write D = a11 C11 + a21 C21 + a31 C31, where C11 is the cofactor of a11, and so on.
Example. Evaluate
| 3 2 1 |
| 2 1 3 |
| 4 0 1 |
We will use the general formula along the third row (its zero entry saves work):
| 3 2 1 |
| 2 1 3 | = 4 | 2 1 | − 0 | 3 1 | + 1 | 3 2 | = 4(6 − 1) + 1(3 − 4) = 20 − 1 = 19.
| 4 0 1 |     | 1 3 |     | 2 3 |     | 2 1 |
Which technique to evaluate a determinant is easier ? The answer depends on the person
who is evaluating the determinant. Some like the elementary row operations and some like
the general formula. All that matters is to get the correct answer.
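The cofactor expansion generalises to any order; a minimal recursive sketch (det is my own helper name, written for the small matrices used in these notes, not for efficiency):

```python
def det(M):
    # cofactor (Laplace) expansion along the first row
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        # minor: delete row 0 and column j
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

d = det([[3, 2, 1],
         [2, 1, 3],
         [4, 0, 1]])    # the 3 x 3 example above
```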
1. Any matrix A and its transpose have the same determinant, meaning
det A det AT
3. If we interchange two rows, the determinant of the new matrix is the opposite of the
old one, that is
| c d | = − | a b |
| a b |     | c d |
4. If we multiply one row by a constant k, the determinant of the new matrix is the
determinant of the old one multiplied by k, that is
| ka kb | = k | a b |
|  c  d |     | c d |
In particular, if all the entries in one row are zero, then the determinant is zero.
5. If we add to one row a multiple of another row, the determinant of the new
matrix is the same as the old one, that is
| a + kc  b + kd | = | a b |
|   c       d    |   | c d |
Note that whenever you want to replace a row by something (through elementary
operations), do not multiply the row itself by a constant. Otherwise, you will easily
make errors (due to Property 4).
6. We have
det(AB) = det A · det B.
In particular, if A is invertible,
det(A⁻¹) = 1 / det A.
Invertible matrices are very important in many areas of science. For example, decrypting a
coded message uses invertible matrices. A square matrix A is invertible if there exists a
matrix B of the same order such that
AB = BA = In,
where In is the identity matrix. The matrix B is called the inverse matrix of A. The inverse of
an n by n matrix is another n by n matrix. If the first matrix is A, its inverse is written A⁻¹ (and
pronounced "A inverse").
Example. Let
A = | 2 3 |   and   B = | −1  3/2 |
    | 2 2 |             |  1  −1  |
Show that B is the inverse of A.
Solution
AB = | 2 3 | | −1  3/2 | = | 1 0 | = I2, and similarly BA = I2.
     | 2 2 | |  1  −1  |   | 0 1 |
Hence A is invertible and B is its inverse.
Notation. A common notation for the inverse of a matrix A is A⁻¹. So AA⁻¹ = A⁻¹A = In.
Method 1
Find the inverse of
A = |  1 1 |
    | −1 2 |
Solution:
Writing A⁻¹ as an unknown 2 × 2 matrix and solving the equations given by AA⁻¹ = I2,
we find
A⁻¹ = | 2/3 −1/3 |
      | 1/3  1/3 |
The inverse matrix is unique when it exists. So if A is invertible, then A⁻¹ is also invertible and
(A⁻¹)⁻¹ = A.
Method 2
Thus,
A⁻¹ = (1/|A|) adj A,
where adj A, the adjugate, is the transpose of the matrix of cofactors.
Example: Given
A = | a b |
    | c d |
find A⁻¹.
Solution:
The matrix of cofactors is |  d −c |, and its transpose is adj A = |  d −b |,
                           | −b  a |                               | −c  a |
which gives
A⁻¹ = 1/(ad − bc) |  d −b |
                  | −c  a |
A set of vectors v1, v2, …, vn (for example, the rows or columns of a matrix) is linearly
dependent if there exist n scalars α1, α2, …, αn, not all equal to zero, such that
α1 v1 + α2 v2 + ⋯ + αn vn = 0.
When no such scalars exist, so that the relationship above holds only when
α1 = α2 = ⋯ = αn = 0, the set is said to be linearly independent.
For a square matrix A: if |A| ≠ 0, then the rows and columns of A are linearly independent.
Example. Consider the matrix
A = |  1  4 3 |
    | −2 18 7 |
    |  4 −6 1 |
where |A| = 0. This implies linear dependence exists between the rows or columns of A.
Indeed, the columns are linearly dependent because c2 = 2(c3 − c1), where ci denotes
column i. Consider another example,
B = | 1 1 0 |
    | 3 2 1 |
    | 1 1 3 |
where |B| = −3 ≠ 0. So the rows and columns of B are linearly independent.
Write the matrix A in terms of its columns as
A = ( v1 v2 ⋯ vn ).
The rank of A is the maximum number of linearly independent vectors in the set
v1, v2, …, vn, and equals the dimension of the vector space spanned by them.
A second (equivalent) definition of the rank of a matrix uses the
concept of sub-matrices. A sub-matrix of A is any matrix that can be formed from the elements
of A by ignoring one, or more than one, row or column. It may be shown that the rank of a
general m × n matrix is equal to the size of the largest square sub-matrix of A whose
determinant is non-zero. Therefore, if a matrix A has an r × r sub-matrix S with |S| ≠ 0, but
no (r + 1) × (r + 1) sub-matrix with non-zero determinant, then the rank of the matrix is r. From
either definition it is clear that the rank of A is less than or equal to the smaller of m and n.
Example. Find the rank of the matrices
(a) A = | 1 1 0 −2 |   and   (b) B = | 1 3 −1 |
        | 2 0 2  2 |                 | 8 9  4 |
        | 4 1 3  1 |                 | 2 1  2 |
Solution
(a) The largest square sub-matrices of A are 3 × 3, and each of the four 3 × 3 sub-determinants
vanishes:
| 1 1 0 |   | 1 1 −2 |   | 1 0 −2 |   | 1 0 −2 |
| 2 0 2 | = | 2 0  2 | = | 2 2  2 | = | 0 2  2 | = 0.
| 4 1 3 |   | 4 1  1 |   | 4 3  1 |   | 1 3  1 |
This implies the rank of A is not 3. The next largest square sub-matrices of A are of dimension
2 × 2. Consider, for example, the 2 × 2 sub-matrix formed by ignoring the third row and the
third and fourth columns of A; this has determinant
| 1 1 | = (1 × 0) − (1 × 2) = −2 ≠ 0.
| 2 0 |
Thus A has a 2 × 2 sub-matrix with non-zero determinant but no larger one, so the rank of A is 2.
For example,
x + y + z = 1
2x + 3y² = 1
is a nonlinear system (because of y²), while
x + 3y + 3z = 2
x + y + z = 2
is a nonhomogeneous linear system.
Remark. In the more general case in which m = n but det A = 0, the inverse does not exist
and so any procedure using A⁻¹ would not work. In such circumstances, we consider more
carefully what "solution" means. Generally, when a solution vector x exists whose elements
simultaneously satisfy all the equations in the system, the system is said to be
consistent. Otherwise it is inconsistent.
Matrices are helpful in rewriting a linear system in a very simple form. The algebraic properties
of matrices may then be used to solve systems. First, consider the linear system
$$\begin{aligned} ax + by + cz + dw &= e \\ fx + gy + hz + iw &= j \\ kx + ly + mz + nw &= p \\ qx + ry + sz + tw &= u \end{aligned}$$

This system can be written in matrix form as AX = C, where

$$A = \begin{pmatrix} a & b & c & d \\ f & g & h & i \\ k & l & m & n \\ q & r & s & t \end{pmatrix}, \quad X = \begin{pmatrix} x \\ y \\ z \\ w \end{pmatrix}, \quad C = \begin{pmatrix} e \\ j \\ p \\ u \end{pmatrix}.$$

The matrix A is called the matrix coefficient of the linear system. The matrix C is called the
non-homogeneous term. When C = 0, the linear system is homogeneous. The matrix X is the
unknown matrix; its entries are the unknowns of the linear system. The augmented matrix
associated with the system is the matrix [A | C], where

$$[A\,|\,C] = \left(\begin{array}{cccc|c} a & b & c & d & e \\ f & g & h & i & j \\ k & l & m & n & p \\ q & r & s & t & u \end{array}\right)$$
In general, if the linear system has n equations with m unknowns, then the matrix coefficient
will be an n × m matrix and the augmented matrix an n × (m + 1) matrix. Now we turn our
attention to solving linear systems.
Definition: Two linear systems with n unknowns are said to be equivalent if and only if they
have the same set of solutions.
Example. Consider the system of equations

$$\begin{pmatrix} 9 & 3 & 4 \\ 4 & 3 & 4 \\ 1 & 1 & 1 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 7 \\ 8 \\ 3 \end{pmatrix}$$
Switching the first and third rows (without switching the elements in the right-hand column
vector) gives
$$\begin{pmatrix} 1 & 1 & 1 \\ 4 & 3 & 4 \\ 9 & 3 & 4 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 3 \\ 8 \\ 7 \end{pmatrix}$$
Subtracting 9 times the first row from the third row gives
$$\begin{pmatrix} 1 & 1 & 1 \\ 4 & 3 & 4 \\ 0 & -6 & -5 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 3 \\ 8 \\ -20 \end{pmatrix}$$
Subtracting 4 times the first row from the second row gives
$$\begin{pmatrix} 1 & 1 & 1 \\ 0 & -1 & 0 \\ 0 & -6 & -5 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 3 \\ -4 \\ -20 \end{pmatrix}$$
Finally, adding -6 times the second row to the third row gives
$$\begin{pmatrix} 1 & 1 & 1 \\ 0 & -1 & 0 \\ 0 & 0 & -5 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 3 \\ -4 \\ 4 \end{pmatrix}$$

which can be solved immediately to give x₃ = −4/5, back-substituting to obtain x₂ = 4 (which
actually follows trivially in this example), and then again back-substituting to find x₁ = −1/5.
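The elimination above can be checked numerically (a sketch assuming NumPy is available):

```python
import numpy as np

# Coefficient matrix and right-hand side of the example above.
A = np.array([[9.0, 3.0, 4.0],
              [4.0, 3.0, 4.0],
              [1.0, 1.0, 1.0]])
b = np.array([7.0, 8.0, 3.0])

x = np.linalg.solve(A, b)  # LAPACK's LU-based solver
print(x)                   # [-0.2  4.  -0.8], i.e. x1 = -1/5, x2 = 4, x3 = -4/5
```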
Example. Solve the system

y + z = 2
2x + 3z = 5
x + y + z = 3

Solution: The augmented matrix is

$$\left(\begin{array}{ccc|c} 0 & 1 & 1 & 2 \\ 2 & 0 & 3 & 5 \\ 1 & 1 & 1 & 3 \end{array}\right)$$
Interchanging the first and third rows, and then subtracting twice the (new) first row from the second, gives

$$\begin{array}{l} x + y + z = 3 \\ -2y + z = -1 \\ y + z = 2 \end{array} \qquad \left(\begin{array}{ccc|c} 1 & 1 & 1 & 3 \\ 0 & -2 & 1 & -1 \\ 0 & 1 & 1 & 2 \end{array}\right)$$

Eliminating y from the third row (R₃ → 2R₃ + R₂) gives

$$\begin{array}{l} x + y + z = 3 \\ -2y + z = -1 \\ 3z = 3 \end{array} \qquad \left(\begin{array}{ccc|c} 1 & 1 & 1 & 3 \\ 0 & -2 & 1 & -1 \\ 0 & 0 & 3 & 3 \end{array}\right)$$

Finally, dividing the second row by −2 and the third by 3 gives

$$\begin{array}{l} x + y + z = 3 \\ y - \tfrac{1}{2}z = \tfrac{1}{2} \\ z = 1 \end{array} \qquad \left(\begin{array}{ccc|c} 1 & 1 & 1 & 3 \\ 0 & 1 & -\tfrac{1}{2} & \tfrac{1}{2} \\ 0 & 0 & 1 & 1 \end{array}\right)$$

The last equation gives z = 1; the second equation now gives y = 1. Finally, the first equation gives x = 1.
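Again, this can be verified numerically (assuming NumPy):

```python
import numpy as np

# y + z = 2, 2x + 3z = 5, x + y + z = 3, in the row order used above.
A = np.array([[0.0, 1.0, 1.0],
              [2.0, 0.0, 3.0],
              [1.0, 1.0, 1.0]])
b = np.array([2.0, 5.0, 3.0])

x = np.linalg.solve(A, b)
print(x)  # [1. 1. 1.]
```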
$$\det(A - \lambda I_n) = \lambda^{N} + a_{N-1}\lambda^{N-1} + \cdots + a_1\lambda + a_0 = 0$$

and then substitute the λᵢ's, one by one, back into (A − λIₙ)v = 0 to solve it for the eigenvectors vᵢ.
Computation of Eigenvalues
For a square matrix A of order n, the number λ is an eigenvalue if and only if there exists a
non-zero vector x such that

Ax = λx.

Using the matrix multiplication properties, we obtain

(A − λIₙ)x = 0.

This is a linear system for which the matrix coefficient is A − λIₙ. We also know that this
system has exactly one solution if and only if the matrix coefficient is invertible, i.e. det(A − λIₙ) ≠ 0.
Since the zero vector is a solution and x is not the zero vector, we must have

det(A − λIₙ) = 0.
Example. Consider the matrix

$$A = \begin{pmatrix} 1 & 2 \\ 2 & 0 \end{pmatrix}.$$

The characteristic equation is

$$\det(A - \lambda I_2) = \begin{vmatrix} 1-\lambda & 2 \\ 2 & -\lambda \end{vmatrix} = (1-\lambda)(-\lambda) - 4 = \lambda^2 - \lambda - 4 = 0,$$

whose roots are

$$\lambda_1 = \frac{1 + \sqrt{17}}{2} \quad \text{and} \quad \lambda_2 = \frac{1 - \sqrt{17}}{2}.$$
In other words, the matrix A has only two eigenvalues.
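A quick numerical check of these eigenvalues (a sketch assuming NumPy is available):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 0.0]])

# Roots of det(A - lambda*I) = lambda^2 - lambda - 4.
lams = np.sort(np.linalg.eigvals(A))
print(lams)  # (1 - sqrt(17))/2 and (1 + sqrt(17))/2
```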
Example. Consider the matrix

$$A = \begin{pmatrix} 0 & 1 \\ 0 & 1 \end{pmatrix}$$
First, we find its eigenvalues:

$$\det(A - \lambda I_2) = \begin{vmatrix} -\lambda & 1 \\ 0 & 1-\lambda \end{vmatrix} = -\lambda(1-\lambda) = 0, \qquad \text{so} \quad \lambda_1 = 0, \ \lambda_2 = 1,$$

and then get the corresponding eigenvectors. For λ₁ = 0,

$$(A - \lambda_1 I)v_1 = \begin{pmatrix} 0 & 1 \\ 0 & 1 \end{pmatrix}\begin{pmatrix} v_{11} \\ v_{21} \end{pmatrix} = \begin{pmatrix} v_{21} \\ v_{21} \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix},$$

so v₂₁ = 0 and we may take v₁ = (1, 0)ᵀ. Similarly, for λ₂ = 1 the system (A − λ₂I)v₂ = 0 forces
v₁₂ = v₂₂, so v₂ = (1/√2)(1, 1)ᵀ, where we have chosen v₁₁, v₁₂ and v₂₂ so that the norms of the
eigenvectors become one.
$$D = \begin{pmatrix} a & 0 & 0 & 0 \\ 0 & b & 0 & 0 \\ 0 & 0 & c & 0 \\ 0 & 0 & 0 & d \end{pmatrix}$$
5. If A and B are similar, then they have the same characteristic polynomial (which
implies they also have the same eigenvalues).
Computation of Eigenvectors
Let A be a square matrix of order n and λ one of its eigenvalues. Let x be an eigenvector of
A associated to λ. We must have

Ax = λx  or  (A − λIₙ)x = 0.

This is a linear system for which the matrix coefficient is A − λIₙ. Since the zero vector is a
solution, the system is consistent; the eigenvectors are its non-zero solutions.
Remark. It is quite easy to notice that if x is a vector which satisfies Ax = λx, then the vector
y = cx (for any arbitrary number c) satisfies the same equation, i.e. Ay = λy. In other words,
if we know that x is an eigenvector, then cx is also an eigenvector associated to the same
eigenvalue.
Example: Find the eigenvectors and eigenvalues of the linear transformation u = Ax which
has the component form

u₁ = 3x₁ + 4x₂,
u₂ = 5x₁ + 2x₂.

The characteristic equation is

$$\begin{vmatrix} 3-\lambda & 4 \\ 5 & 2-\lambda \end{vmatrix} = \lambda^2 - 5\lambda - 14 = 0,$$

or (λ + 2)(λ − 7) = 0, so λ₁ = −2 and λ₂ = 7.
For λ₁ = −2 the equations (A − λ₁I)x = 0 read

5x₁ + 4x₂ = 0,
5x₁ + 4x₂ = 0,

which implies

x₁/x₂ = −4/5.
For λ₂ = 7 they read

−4x₁ + 4x₂ = 0,
5x₁ − 5x₂ = 0,

which implies

x₁/x₂ = 1.

Hence we may take as eigenvectors a₁ = (4, −5)ᵀ and a₂ = (1, 1)ᵀ.
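The eigenvalues and eigenvector directions can be confirmed numerically (assuming NumPy):

```python
import numpy as np

A = np.array([[3.0, 4.0],
              [5.0, 2.0]])

lams, vecs = np.linalg.eig(A)
print(np.sort(lams))  # [-2.  7.]

# Each returned column satisfies A v = lambda v (up to scale).
for lam, v in zip(lams, vecs.T):
    assert np.allclose(A @ v, lam * v)
```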
Remark. In general, the eigenvalues of a matrix are not all distinct from each other (they may
be repeated roots or complex roots). In the next two examples, we discuss this
problem.
Example. Consider the matrix

$$A = \begin{pmatrix} 1 & 4 \\ -4 & -7 \end{pmatrix}.$$

Its characteristic equation is

$$\begin{vmatrix} 1-\lambda & 4 \\ -4 & -7-\lambda \end{vmatrix} = \lambda^2 + 6\lambda + 9 = (\lambda + 3)^2 = 0.$$

Hence the matrix A has one eigenvalue, i.e. −3. Let us find the associated eigenvectors. These
are given by the linear system

AX = −3X  or  (A + 3I₂)X = 0
The above examples assume that the eigenvalue λ is a real number. So one may wonder
whether an eigenvalue is always real. In general, this is not the case, except for symmetric
matrices. The proof of this is very complicated in general. For square matrices of order 2, the proof is
quite easy. Let us give it here for the sake of completeness.
Consider the symmetric matrix

$$A = \begin{pmatrix} a & b \\ b & c \end{pmatrix}.$$

Its characteristic equation is

$$\det(A - \lambda I_2) = \begin{vmatrix} a-\lambda & b \\ b & c-\lambda \end{vmatrix} = \lambda^2 - (a + c)\lambda + ac - b^2 = 0.$$

This is a quadratic equation. The nature of its roots (which are the eigenvalues of A) depends
on the sign of the discriminant

$$\Delta = (a + c)^2 - 4(ac - b^2) = (a - c)^2 + 4b^2.$$

Therefore, Δ is a non-negative number, which implies that the eigenvalues of A are real numbers.
Remark. Note that the matrix A will have one eigenvalue, i.e. one double root, if and only if
Δ = 0. But this is possible only if a = c and b = 0. In other words, we have A = aI₂.
First let us convince ourselves that there exist matrices with complex eigenvalues. Consider

$$A = \begin{pmatrix} 3 & -2 \\ 4 & -1 \end{pmatrix}$$

The characteristic equation is given by

$$\begin{vmatrix} 3-\lambda & -2 \\ 4 & -1-\lambda \end{vmatrix} = \lambda^2 - 2\lambda + 5 = 0.$$

Its roots are

$$\lambda = \frac{2 \pm \sqrt{-16}}{2} = 1 \pm 2i.$$
The trick is to treat the complex eigenvalue as if it were real: we deal with it as a number
and do the normal calculations for the eigenvectors. Let us see how it works on the above
example.
We will do the calculations for λ₁ = 1 + 2i. The associated eigenvectors are given by the linear
system

AX = (1 + 2i)X.

Writing X = (x, y)ᵀ, both equations reduce to the single equation

(1 − i)x − y = 0.

Set x = c; then y = (1 − i)c. Therefore, we have

$$X = \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} c \\ (1-i)c \end{pmatrix} = c\begin{pmatrix} 1 \\ 1-i \end{pmatrix}$$

where c is an arbitrary number.
Remark. It is clear that one should expect to have complex entries in the eigenvectors.
We have seen that 1 − 2i is also an eigenvalue of the above matrix. Since the entries of the
matrix A are real, one may easily show that if λ is a complex eigenvalue with eigenvector X,
then the complex conjugate λ̄ is also an eigenvalue, and the vector X̄, obtained from X by taking the complex conjugate of the entries of X, is
an associated eigenvector. Hence the eigenvectors associated to 1 − 2i are

$$X = c\begin{pmatrix} 1 \\ 1+i \end{pmatrix}$$

where c is an arbitrary number.
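NumPy finds the conjugate pair directly (a sketch assuming NumPy is available):

```python
import numpy as np

A = np.array([[3.0, -2.0],
              [4.0, -1.0]])

lams = np.linalg.eigvals(A)
# The eigenvalues come out as the conjugate pair 1 + 2i and 1 - 2i.
print(sorted(lams, key=lambda z: z.imag))
```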
coordinate frames, especially 3D spherical and cylindrical polar, and 2D plane polar,
coordinate systems.
The following section uses the Cartesian coordinate system with basis vectors e₁, e₂, e₃
and assumes that all vectors have the origin as a common base point. A vector a will be written
as a = a₁e₁ + a₂e₂ + a₃e₃.
Negative vector
Since PQ + QP = 0, you can write QP = −PQ. That is, QP is the negative of vector PQ. So
the vector −PQ has the same magnitude as the vector PQ, but its direction is exactly opposite
to that of PQ.
Vector equality
Two vectors are said to be equal if they have the same magnitude and direction. Equivalently,
they will be equal if their coordinates are equal. So two vectors a = a₁e₁ + a₂e₂ + a₃e₃ and
b = b₁e₁ + b₂e₂ + b₃e₃ are equal if and only if a₁ = b₁, a₂ = b₂ and a₃ = b₃.
From the above you should be able to see that if two vectors a and b are parallel then one is a
scalar multiple of the other, that is, a = λb for some scalar λ.
To find the unit vector in the direction of a, simply divide by its magnitude: â = a/|a|.
For example, if A = 3x̂ + 4ŷ, then |A| = √(3² + 4²) = √(9 + 16) = √25 = 5.
Scalar multiplication
If r is negative, then the vector ra changes direction: it flips around by an angle of 180°. Two
examples, r = −1 and r = −2, are given in the figure:
Vector subtraction: the difference a − b is
defined as the addition a + (−b), i.e. a − b = a + (−1)b. It is useful to remember that the vector a − b goes from b
to a.
The following results follow immediately from the above definition of vector addition:
a) a + b = b + a (commutativity, Figure 1.3(a))
b) (a + b) + c = a + (b + c) = a + b + c (associativity, Figure 1.3(b))
c) a + 0 = 0 + a = a, where the zero vector is 0 = (0, 0, 0).
d) a + (−a) = 0
Unit vector
A unit vector in a normed vector space is a vector (often a spatial vector) whose length is 1
(the unit length). A unit vector is often denoted by a lowercase letter with a superscribed caret
or “hat”, like this: î (pronounced "i-hat").
The normalized vector or versor û of a non-zero vector u is the unit vector co-directional with
u, i.e.,

û = u / ‖u‖

where ‖u‖ is the norm (or length) of u. The term normalized vector is sometimes used as a
synonym for unit vector.
To normalize a vector a = (a₁, a₂, a₃), scale the vector by the reciprocal of its length ‖a‖. That
is:

â = a/‖a‖ = (a₁/‖a‖)e₁ + (a₂/‖a‖)e₂ + (a₃/‖a‖)e₃
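The normalization recipe above, sketched in NumPy with a hypothetical example vector:

```python
import numpy as np

# A hypothetical example vector; any non-zero vector works.
a = np.array([3.0, 4.0, 0.0])

a_hat = a / np.linalg.norm(a)
print(a_hat)                  # [0.6 0.8 0. ]
print(np.linalg.norm(a_hat))  # 1.0
```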
Null vector
The null (or zero) vector is (0, 0, 0), and it is commonly denoted 0 (in bold), or 0⃗, or simply 0. Unlike any other vector,
it does not have a direction, and cannot be normalized (that is, there is no unit vector which
is a multiple of the null vector). The sum of the null vector with any vector a is a (that is,
0 + a = a).
It follows that:
The direction of the vector λa will reverse if λ is negative, but otherwise is unaffected. (By the way,
a vector where the sign is uncertain is called a director.)
Position Vectors: If you have a fixed origin O and a point A, then the vector OA is defined to
be the position vector of the point A. The line segment representing OA starts at O and ends at A.
Suppose you have two points A and B. The position vector of A is OA = a. The position vector
of B is OB = b. From the vector triangle, you can see that the vector AB is b − a. Likewise,
the vector BA is a − b.
Note that

$$|a - b|^2 = (a - b)\cdot(a - b) = a\cdot a + b\cdot b - 2\,a\cdot b = a^2 + b^2 - 2\,a\cdot b.$$

But, by the cosine rule for the triangle OAB (Figure 1.4a), the length |AB|² is given by

$$|a - b|^2 = a^2 + b^2 - 2ab\cos\theta,$$

which is independent of the co-ordinate system used, so that a·b = ab cos θ. Conversely, the
cosine of the angle between the two vectors a and b is given by cos θ = a·b/(ab).
In components, the dot product is

$$a\cdot b = \sum_{i=1}^{n} a_i b_i = a_1 b_1 + a_2 b_2 + \cdots + a_n b_n.$$

For example, the dot product of the two three-dimensional vectors (1, 3, −5) and (4, −2, −1)
is (1)(4) + (3)(−2) + (−5)(−1) = 3.
Example
Find the angle between the vectors a = 2i + 3j + k and b = i + 5j − 4k, giving your answer
to the nearest tenth of a degree.

|a| = √(4 + 9 + 1) = √14
|b| = √(1 + 25 + 16) = √42

Also a·b = (2i + 3j + k)·(i + 5j − 4k)
= 2 + 15 − 4
= 13

Hence

cos θ = 13/√588 ≈ 0.5361, so θ ≈ 57.6°.
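The same computation in NumPy (with the signs read as a = 2i + 3j + k and b = i + 5j − 4k, one consistent reading of the example above):

```python
import numpy as np

a = np.array([2.0, 3.0, 1.0])
b = np.array([1.0, 5.0, -4.0])

cos_theta = (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
theta = np.degrees(np.arccos(cos_theta))
print(round(theta, 1))  # 57.6
```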
Another way of describing the scalar product is as the product of the magnitude of one vector
and the component of the other in the direction of the first, since b cos is the component of
b in the direction of a and vice versa (Figure 1.4b)
Projection is particularly useful when the second vector is a unit vector: a·î is the component
of a in the direction of î.
Notice that if we wanted the vector component of b in the direction of a, we would write

$$(b\cdot\hat{a})\,\hat{a} = \frac{(a\cdot b)}{a^2}\,a.$$
In the particular case a·b = 0, the angle between the two vectors is a right angle and the
vectors are said to be mutually orthogonal or perpendicular – neither vector has any
component in the direction of the other.
An orthonormal coordinate system is characterized by î·î = ĵ·ĵ = k̂·k̂ = 1 and
î·ĵ = ĵ·k̂ = k̂·î = 0.
If we regard a as a column matrix

$$a = \begin{pmatrix} a_1 \\ a_2 \\ a_3 \end{pmatrix},$$

then the dot product can be written as the matrix product

$$a\cdot b = a^{T} b = (a_1\ a_2\ a_3)\begin{pmatrix} b_1 \\ b_2 \\ b_3 \end{pmatrix} = a_1 b_1 + a_2 b_2 + a_3 b_3.$$

(Recall that a matrix product MN is defined only when M has as many columns as N has rows, here denoted by n, and that the result of an m × n times n × p product has size m × p.)
The quantities

$$\frac{a\cdot\hat{i}}{a}, \quad \frac{a\cdot\hat{j}}{a}, \quad \frac{a\cdot\hat{k}}{a}$$

represent the cosines of the angles which the vector a makes with the co-ordinate vectors
î, ĵ, k̂ and are known as the direction cosines of the vector a. Writing them as cos α, cos β, cos γ, and since a·î = a₁ etc., it follows
immediately that

$$a = a(\cos\alpha\,\hat{i} + \cos\beta\,\hat{j} + \cos\gamma\,\hat{k}) \quad \text{and} \quad \cos^2\alpha + \cos^2\beta + \cos^2\gamma = \frac{1}{a^2}(a_1^2 + a_2^2 + a_3^2) = 1.$$
$$a \times b = \begin{vmatrix} \hat{i} & \hat{j} & \hat{k} \\ a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \end{vmatrix}$$
where the top row consists of the vectors iˆ, ˆj, kˆ rather than scalars.
Since a determinant with two equal rows has value zero, it follows that a × a = 0. It is also
easily verified that (a × b)·a = (a × b)·b = 0, so that a × b is orthogonal (perpendicular) to both
a and b. The magnitude of the vector product satisfies

$$|a \times b|^2 = a^2 b^2 - (a\cdot b)^2,$$

which is again independent of the co-ordinate system used. This is left as an exercise. Unlike
the scalar product, the vector product does not satisfy commutativity but is in fact anti-
commutative, in that a × b = −b × a. Moreover, the vector product does not satisfy the
associative law of multiplication either since, as we shall see later, (a × b) × c ≠ a × (b × c).
Since the vector product is known to be orthogonal to both the vectors which form the
product, it merely remains to specify its sense with respect to these vectors. Assuming that the
co-ordinate vectors form a right-handed set in the order i , j , k it can be seen that the sense
of the vector product is also right handed, i.e. the vector product has the same sense as the
co-ordinate system used.
$$\hat{i} \times \hat{j} = \begin{vmatrix} \hat{i} & \hat{j} & \hat{k} \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{vmatrix} = \hat{k}$$
In practice, figure out the direction from a right-handed screw twisted from the first to second
vector as shown in Figure below.
Example. Evaluate (i + 2j + 5k) × (2i + j + 3k), (a) directly and (b) using the determinant.

(a) (i + 2j + 5k) × (2i + j + 3k)
= 2(i × i) + (i × j) + 3(i × k) + 4(j × i) + 2(j × j) + 6(j × k) + 10(k × i) + 5(k × j) + 15(k × k)
= 0 + k − 3j − 4k + 0 + 6i + 10j − 5i + 0
= i + 7j − 3k

(b) Using the determinant,

$$\begin{vmatrix} \hat{i} & \hat{j} & \hat{k} \\ 1 & 2 & 5 \\ 2 & 1 & 3 \end{vmatrix} = \hat{i}(6 - 5) - \hat{j}(3 - 10) + \hat{k}(1 - 4) = i + 7j - 3k.$$
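The determinant evaluation agrees with NumPy's built-in cross product (a sketch assuming NumPy):

```python
import numpy as np

a = np.array([1, 2, 5])
b = np.array([2, 1, 3])

print(np.cross(a, b))  # [ 1  7 -3]
```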
sides are parallel to, and have lengths equal to the magnitudes of, the vectors a and b (Figure
1.6b). Its direction is perpendicular to the parallelogram.
Example
g is the vector from A(1, 2, 3) to B(3, 4, 5), and â is the unit vector in the direction from O to A. Find
m̂, a UNIT vector along g × â. Verify that m̂ is perpendicular to â. Find n̂, the third member
of a right-handed orthonormal triad â, m̂, n̂.

We have g = (2, 2, 2) and â = (1/√14)(1, 2, 3). Hence

g × â = (1/√14)(2, 2, 2) × (1, 2, 3) = (1/√14)(2, −4, 2),

so that

m̂ = (1/√24)(2, −4, 2).

To verify perpendicularity, m̂·â = (1/√(24·14))(2 − 8 + 6) = 0, as required. Finally,

n̂ = â × m̂ = (1/√21)(4, 1, −2).
pseudo-determinant expression for the vector product, we see that the scalar triple product
can be represented as the true determinant

$$a\cdot(b \times c) = \begin{vmatrix} a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \end{vmatrix}$$

(iii) The fact that (a × b)·c = a·(b × c) allows the scalar triple product to be written as
[a, b, c]. This notation is not very helpful, and we will try to avoid it below.
and direction perpendicular to the base. The component of c in this direction is equal to the
height of the parallelepiped shown in Figure 2.1(a).
Trial Questions
Let u = (1, −2, 3), v = (2, 1, 2) and w = (0, 3, 1) be vectors in R³. Find u·(v × w), w·(u × v) and
v·(w × u). What do you notice?

Solution:

$$v \times w = \begin{vmatrix} \hat{i} & \hat{j} & \hat{k} \\ 2 & 1 & 2 \\ 0 & 3 & 1 \end{vmatrix} = \hat{i}\begin{vmatrix} 1 & 2 \\ 3 & 1 \end{vmatrix} - \hat{j}\begin{vmatrix} 2 & 2 \\ 0 & 1 \end{vmatrix} + \hat{k}\begin{vmatrix} 2 & 1 \\ 0 & 3 \end{vmatrix}$$

= i(1 − 6) − j(2 − 0) + k(6 − 0)
= −5i − 2j + 6k = (−5, −2, 6).

Then

u·(v × w) = (1, −2, 3)·(−5, −2, 6) = −5 + 4 + 18 = 17.

The other two triple products, computed in the same way, also equal 17.
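Reading the vectors as u = (1, −2, 3), v = (2, 1, 2) and w = (0, 3, 1) (consistent with the value 17 obtained above), the cyclic symmetry of the scalar triple product can be checked with NumPy:

```python
import numpy as np

u = np.array([1, -2, 3])
v = np.array([2, 1, 2])
w = np.array([0, 3, 1])

t1 = u @ np.cross(v, w)
t2 = w @ np.cross(u, v)
t3 = v @ np.cross(w, u)
print(t1, t2, t3)  # 17 17 17 -- the scalar triple product is cyclic
```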
Example 1. Let a = î + 2ĵ and b = 2î + ĵ. Is |a| = |b|? Are the vectors a and b equal?
Solution
|a| = √(1² + 2²) = √5
|b| = √(2² + 1²) = √5
So |a| = |b|. But the two vectors are not equal, since their corresponding components are distinct.
Example 2
Find the unit vector in the direction of the vector a = 2î + 3ĵ + k̂.

|a| = √(2² + 3² + 1²) = √14

Therefore â = (1/√14)(2î + 3ĵ + k̂) = (2/√14)î + (3/√14)ĵ + (1/√14)k̂.
Example 3. Find a vector in the direction of the vector a = î − 2ĵ that has magnitude 7 units.
The sum is a + b = c = 4î + 3ĵ − 2k̂, say,
and |c| = √(4² + 3² + (−2)²) = √29.
Example 5
Write the direction ratios of the vector a = î + ĵ − 2k̂ and hence calculate its direction cosines.
The direction ratios are (1, 1, −2) and |a| = √6, so the direction cosines are

(1/√6, 1/√6, −2/√6).
Example. Find the angle between the vectors a = î + ĵ − k̂ and b = î − ĵ + k̂.

We use

cos θ = (a·b)/(|a||b|).

Now a·b = (î + ĵ − k̂)·(î − ĵ + k̂) = 1 − 1 − 1 = −1, and |a| = |b| = √3.

Therefore, we have cos θ = −1/3.

Hence the required angle is θ = cos⁻¹(−1/3).
Example. If a = 5î − ĵ + 3k̂ and b = î + 3ĵ − 5k̂, then show that the vectors a + b and a − b are perpendicular.

We have

a + b = (5î − ĵ + 3k̂) + (î + 3ĵ − 5k̂) = 6î + 2ĵ − 2k̂

and

a − b = (5î − ĵ + 3k̂) − (î + 3ĵ − 5k̂) = 4î − 4ĵ + 8k̂.

So (a + b)·(a − b) = (6î + 2ĵ − 2k̂)·(4î − 4ĵ + 8k̂) = 24 − 8 − 16 = 0, hence the two vectors are perpendicular.
Example. Find the projection of the vector a = 2î + 3ĵ + 2k̂ on the vector b = î + 2ĵ + k̂.
The projection is (a·b)/|b| = (2 + 6 + 2)/√6 = 10/√6 = (5/3)√6.
Example. If a and b are two vectors such that |a| = 2, |b| = 3 and a·b = 4, find |a − b|.
We have

|a − b|² = (a − b)·(a − b) = a·a − a·b − b·a + b·b = |a|² − 2(a·b) + |b|² = 4 − 8 + 9 = 5,

so |a − b| = √5.
Example. Find |a × b|, if a = 2î + ĵ + 3k̂ and b = 3î + 5ĵ − 2k̂.

We have

$$a \times b = \begin{vmatrix} \hat{i} & \hat{j} & \hat{k} \\ 2 & 1 & 3 \\ 3 & 5 & -2 \end{vmatrix} = -17\hat{i} + 13\hat{j} + 7\hat{k}.$$

Hence

|a × b| = √(17² + 13² + 7²) = √507.
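A NumPy check of this cross product (a sketch assuming NumPy is available):

```python
import numpy as np

a = np.array([2, 1, 3])
b = np.array([3, 5, -2])

c = np.cross(a, b)
print(c)      # [-17  13   7]
print(c @ c)  # 507, so |a x b| = sqrt(507)
```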
Example. Find the area of a triangle having the points A(1, 1, 1), B(1, 2, 3) and C(2, 3, 1) as its
vertices.
Here AB = ĵ + 2k̂ and AC = î + 2ĵ, so

|AB × AC| = √(16 + 4 + 1) = √21.

The required area is (1/2)√21.
Example. Find the area of a parallelogram whose adjacent sides are given by the vectors
a = 3î + ĵ + 4k̂ and b = î − ĵ + k̂.
Solution: the area of a parallelogram with a and b as its adjacent sides is given by |a × b|. Now

$$a \times b = \begin{vmatrix} \hat{i} & \hat{j} & \hat{k} \\ 3 & 1 & 4 \\ 1 & -1 & 1 \end{vmatrix} = 5\hat{i} + \hat{j} - 4\hat{k},$$

so |a × b| = √(25 + 1 + 16) = √42, and the required area is √42.
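And a final NumPy check of the parallelogram area:

```python
import numpy as np

a = np.array([3, 1, 4])
b = np.array([1, -1, 1])

n = np.cross(a, b)
area = np.linalg.norm(n)
print(n)     # [ 5  1 -4]
print(area)  # sqrt(42), approximately 6.48
```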