160.102 Linear Mathematics - Massey - Exam - S1 2017
1701/160.102    MAN    Distance/Internal    G
MASSEY UNIVERSITY
EXAMINATION FOR
160.102 LINEAR MATHEMATICS
Please note that the Table of Useful Information is attached at the end of this question
paper.
1. Let a = [1, 0, 1]ᵀ and b = [2, −3, 0]ᵀ.
[3 + 2 + 3 + 2 = 10 marks]
2. Consider the line L1 with direction vector d = [2, 1, 4]ᵀ, passing through the point
P1 = (3, −2, 0).
(a) Write down the equation of the line L1 in vector and parametric forms.
[4 + 1 + 5 = 10 marks]
3. Consider the system of linear equations

   x − 2y + 3z = 2
   x + y + z = k
   2x − y + 4z = k²
(a) Use the Gaussian elimination method to determine the value(s) of k, if any, such
that the system has
(iii) no solutions.
[7 + 3 = 10 marks]
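As an illustrative cross-check only (not the symbolic working in k that the question requires), the elimination can be run in exact arithmetic for specific values of k. The helper below is a hypothetical sketch using Python's fractions module: it row-reduces the augmented matrix and flags an inconsistent row of the form 0 = nonzero.

```python
from fractions import Fraction

def eliminate(aug):
    """Forward-eliminate an augmented matrix (list of rows of Fractions);
    returns a row-echelon form."""
    rows = [r[:] for r in aug]
    m, n = len(rows), len(rows[0])
    pivot_row = 0
    for col in range(n - 1):
        # find a pivot in this column at or below pivot_row
        pr = next((r for r in range(pivot_row, m) if rows[r][col] != 0), None)
        if pr is None:
            continue
        rows[pivot_row], rows[pr] = rows[pr], rows[pivot_row]
        for r in range(pivot_row + 1, m):
            factor = rows[r][col] / rows[pivot_row][col]
            rows[r] = [a - factor * b for a, b in zip(rows[r], rows[pivot_row])]
        pivot_row += 1
    return rows

def inconsistent(aug):
    """True if some row reads 0 = nonzero after elimination."""
    return any(all(x == 0 for x in row[:-1]) and row[-1] != 0
               for row in eliminate(aug))

def system(k):
    """Augmented matrix of the system in question 3 for a given k."""
    F = Fraction
    return [[F(1), F(-2), F(3), F(2)],
            [F(1), F(1), F(1), F(k)],
            [F(2), F(-1), F(4), F(k) ** 2]]
```

Testing a few values of k this way suggests which values leave the system consistent; the question still asks for the general elimination with k left symbolic.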
4. Let

       [  2   1  −4 ]        [ 1   0 ]        [ 1 ]
   A = [ −4  −1   6 ],   B = [ 1  −1 ],   C = [ 0 ].
       [ −2   2  −1 ]        [ 0   1 ]        [ 0 ]
Compute each of the following where possible, or explain why it is undefined:
(a) AB;
(b) BA;
(c) A⁻¹;
[3 + 1 + 4 + 2 = 10 marks]
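A dimension check is the quick way to decide which products exist: AB needs the column count of A to equal the row count of B. A minimal sketch in Python (the matmul helper is hypothetical, not part of the paper), with A and B as in question 4:

```python
def matmul(X, Y):
    """Multiply matrices given as lists of rows; raise if the shapes don't conform."""
    if len(X[0]) != len(Y):
        raise ValueError(f"cannot multiply {len(X)}x{len(X[0])} by {len(Y)}x{len(Y[0])}")
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A = [[2, 1, -4], [-4, -1, 6], [-2, 2, -1]]   # 3x3
B = [[1, 0], [1, -1], [0, 1]]                # 3x2

AB = matmul(A, B)   # 3x3 times 3x2: defined, gives a 3x2 matrix
# matmul(B, A) raises: 3x2 times 3x3 has inner dimensions 2 and 3
```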
5. Let B = { [0, 1, 2]ᵀ, [1, 2, 3]ᵀ } and u = [5, 3, 1]ᵀ.
(c) Determine [u]_B (the B-coordinate vector of u).
[1 + 3 + 2 = 6 marks]
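For a set B of two vectors in R³, the B-coordinates of u solve c1 b1 + c2 b2 = u: two components determine c1 and c2 (Cramer's rule on a 2 × 2 system) and the third component acts as a consistency check that u really lies in span{b1, b2}. A hypothetical sketch, reading the vectors in question 5 as b1 = (0, 1, 2), b2 = (1, 2, 3), u = (5, 3, 1):

```python
from fractions import Fraction as F

def coords_2basis(b1, b2, u):
    """Solve c1*b1 + c2*b2 = u for vectors in R^3, assuming b1, b2 are
    independent in their first two components; checks the third."""
    # 2x2 system b1[i]*c1 + b2[i]*c2 = u[i] for i = 0, 1, by Cramer's rule
    det = b1[0] * b2[1] - b2[0] * b1[1]
    c1 = F(u[0] * b2[1] - b2[0] * u[1], det)
    c2 = F(b1[0] * u[1] - u[0] * b1[1], det)
    # third component must agree, otherwise u is outside span{b1, b2}
    assert c1 * b1[2] + c2 * b2[2] == u[2], "u is not in span{b1, b2}"
    return c1, c2
```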
[1 + 2 + 3 = 6 marks]
(b) Solve (z − 3)/(z − 1) = i for a complex number z ∈ C, and write z in real-imaginary form.
(c) Express 1/(5i) in polar form.
(d) Given that 1 − 2i is a root of P (z) = z 4 − 2z 3 + 9z 2 − 8z + 20, find all roots of P (z).
[2 + 4 + 3 + 5 = 14 marks]
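As a numeric sanity check (using Python's cmath, not the hand working the question asks for), one can confirm that the given root 1 − 2i and its conjugate both annihilate P, consistent with the conjugate-root fact on the formula sheet, and read off the polar form of 1/(5i):

```python
import cmath

def P(z):
    """The quartic from part (d)."""
    return z**4 - 2*z**3 + 9*z**2 - 8*z + 20

# 1 - 2i is given as a root; its conjugate 1 + 2i must also be one,
# since P has real coefficients
residues = [abs(P(1 - 2j)), abs(P(1 + 2j))]

# polar form of 1/(5i): modulus 1/5, argument -pi/2
r, theta = cmath.polar(1 / (5j))
```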
8. (a) Let

           [  0  1  −1 ]
       A = [  1  0   1 ].
           [ −1  1   0 ]

   (i) Show that v1 = [1, −1, 1]ᵀ is an eigenvector of A.
What is its associated eigenvalue?
(ii) Show that the other two eigenvalues of A are both equal to 1, and find the
associated eigenvectors.
(iii) Use the Gram-Schmidt procedure to find an orthogonal basis for E(1), the
eigenspace of A with eigenvalue 1.
(v) Use your results so far to classify the quadratic form 2xy − 2xz + 2yz.
(b) Use the fact that det(X) = det(Xᵀ) for any square matrix X to show that for all
square matrices A, the eigenvalues of A and Aᵀ are equal.
[(2 + 4 + 4 + 2 + 2) + 4 = 18 marks]
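Part (a)(i) amounts to a single matrix-vector product; a quick numeric check in Python:

```python
def matvec(M, v):
    """Apply a matrix (list of rows) to a column vector."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]

A = [[0, 1, -1], [1, 0, 1], [-1, 1, 0]]
v1 = [1, -1, 1]

# A v1 = [-2, 2, -2] = -2 * v1, so v1 is an eigenvector with eigenvalue -2
Av1 = matvec(A, v1)
```

As a further check, the trace of A is 0 = −2 + 1 + 1, which agrees with the other two eigenvalues both equalling 1 as stated in part (ii).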
9. Consider the following linear programme.

   Maximize    z = 3x + 4y
   subject to  x + y ≤ 6
               x + 2y ≤ 8
               3x + 2y ≥ 12
               x ≥ 0
               y ≥ 0.
(c) Using the information from your graph in part (b), calculate the optimal point
algebraically.
[1 + 5 + 2 + 1 + 3 + 2 + 2 = 16 marks]
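For a two-variable programme, an optimum (when one exists) occurs at a vertex of the feasible region, so a brute-force check can enumerate every pairwise intersection of constraint boundaries, keep the feasible ones, and compare objective values. This is a sketch in exact arithmetic, not the graphical-then-algebraic method the question intends; all constraints are rewritten in ax + by ≤ c form:

```python
from fractions import Fraction as F
from itertools import combinations

constraints = [
    (F(1), F(1), F(6)),      # x + y <= 6
    (F(1), F(2), F(8)),      # x + 2y <= 8
    (F(-3), F(-2), F(-12)),  # 3x + 2y >= 12, written as -3x - 2y <= -12
    (F(-1), F(0), F(0)),     # x >= 0
    (F(0), F(-1), F(0)),     # y >= 0
]

def feasible(x, y):
    return all(a * x + b * y <= c for a, b, c in constraints)

best = None
for (a1, b1, c1), (a2, b2, c2) in combinations(constraints, 2):
    det = a1 * b2 - a2 * b1
    if det == 0:
        continue  # parallel boundaries never meet in a vertex
    # Cramer's rule for the intersection of the two boundary lines
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    if feasible(x, y):
        z = 3 * x + 4 * y
        if best is None or z > best[0]:
            best = (z, x, y)
# best now holds the optimal value and the optimal point
```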
++++++++
160.102 USEFUL INFORMATION
Geometry
• cos(θ) = (u · v) / (‖u‖ ‖v‖)

• proj_u(v) = ((u · v) / (u · u)) u
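The two formulas above translate directly into code; a small Python sketch with hypothetical helper names:

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def norm(u):
    return math.sqrt(dot(u, u))

def cos_angle(u, v):
    """cos(theta) = u.v / (||u|| ||v||)."""
    return dot(u, v) / (norm(u) * norm(v))

def proj(u, v):
    """Projection of v onto u: (u.v / u.u) u."""
    s = dot(u, v) / dot(u, u)
    return [s * a for a in u]
```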
• In R³ the vector equation of a line has the form x = p + t d, with parametric form
    x1 = p1 + t d1
    x2 = p2 + t d2
    x3 = p3 + t d3
• In R³ the vector equation of a plane has the form x = p + s u + t v, with parametric form
    x1 = p1 + s u1 + t v1
    x2 = p2 + s u2 + t v2
    x3 = p3 + s u3 + t v3
Linear algebra
• A matrix is in row echelon form (REF) if:
i) any zero rows are at the bottom,
ii) and in each non-zero row the first non-zero entry (the leading entry) lies to the
left of any leading entries below it.
• A matrix is in reduced row echelon form (RREF) if:
i) it is in REF,
ii) the leading entry in each non-zero row is 1,
iii) and each column containing a leading entry has zeros everywhere else.
• The span of the vectors v1 , v2 , . . . , vk is the set of all linear combinations c1 v1 +
c2 v2 + · · · + ck vk of the vectors.
• Vectors {v1 , v2 , . . . , vk } are said to be linearly independent if the only solution
to c1 v1 + c2 v2 + · · · + ck vk = 0 is c1 = c2 = · · · = ck = 0, and said to be linearly
dependent otherwise.
• If

      A = [ a  b ]
          [ c  d ],

  then det(A) = ad − bc and, provided det(A) ≠ 0,

      A⁻¹ = (1/det(A)) [  d  −b ]
                       [ −c   a ].
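The adjugate formula above is a one-liner in code; a hypothetical sketch using exact rationals so the result carries no rounding:

```python
from fractions import Fraction as F

def inv2(A):
    """Inverse of a 2x2 matrix via the adjugate formula; raises if singular."""
    (a, b), (c, d) = A
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix is singular")
    # swap the diagonal, negate the off-diagonal, divide by the determinant
    return [[F(d, det), F(-b, det)], [F(-c, det), F(a, det)]]
```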
• If

      A = [ a11  a12  a13 ]
          [ a21  a22  a23 ]
          [ a31  a32  a33 ],

  then

      det(A) = a11 det[ a22  a23 ] − a12 det[ a21  a23 ] + a13 det[ a21  a22 ]
                      [ a32  a33 ]          [ a31  a33 ]          [ a31  a32 ].
• Two n × n matrices A and B are said to be similar if there exists an invertible n × n
matrix P such that B = P −1 AP .
• An n × n matrix A is said to be diagonalisable if there exists a diagonal matrix D
that is similar to A.
• A set of vectors {v1 , v2 , . . . , vk } is said to be orthogonal if vi · vj = 0 for all
i, j = 1, 2, . . . , k with i ≠ j.
• A set of vectors {v1 , v2 , . . . , vk } is said to be orthonormal if it is orthogonal and
each vector in the set is a unit vector (has length 1).
• An n×n matrix A is said to be orthogonal if the columns of A form an orthonormal
set.
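The orthogonal-matrix definition is equivalent to QᵀQ = I, which makes a convenient check in code. A hypothetical sketch; a rotation matrix passes, a shear does not:

```python
import math

def is_orthogonal(Q, tol=1e-12):
    """True iff the columns of Q form an orthonormal set (equivalently Q^T Q = I)."""
    n = len(Q)
    cols = list(zip(*Q))  # transpose: rows become columns
    for i in range(n):
        for j in range(n):
            dot = sum(a * b for a, b in zip(cols[i], cols[j]))
            target = 1 if i == j else 0  # unit length on the diagonal, 0 elsewhere
            if abs(dot - target) > tol:
                return False
    return True

# a 2D rotation matrix is orthogonal; a shear is not
c, s = math.cos(0.3), math.sin(0.3)
rotation = [[c, -s], [s, c]]
shear = [[1, 1], [0, 1]]
```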
Complex numbers
• z = x + yi, where x, y ∈ R.

• z̄ = x − yi (the conjugate of z). [Argand diagram omitted: real axis Re, argument θ.]

• cos θ = (e^(iθ) + e^(−iθ)) / 2,   sin θ = (e^(iθ) − e^(−iθ)) / (2i).

• z^n = r e^(iθ)  ⇔  z = r^(1/n) e^(i(θ + 2kπ)/n),  k = 0, 1, . . . , n − 1.
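The n-th-root formula above can be checked numerically with Python's cmath; here, as an example, the three cube roots of −8, one of which is the real root −2:

```python
import cmath
import math

def nth_roots(w, n):
    """All n-th roots of a complex number w, from the formula
    z = r^(1/n) * e^(i(theta + 2k*pi)/n), k = 0, ..., n-1."""
    r, theta = cmath.polar(w)
    return [r ** (1 / n) * cmath.exp(1j * (theta + 2 * k * math.pi) / n)
            for k in range(n)]

roots = nth_roots(-8, 3)  # cube roots of -8
```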
If w is a root of a polynomial p(z) with real coefficients then so is w̄,
and thus (z − w)(z − w̄) is a factor of p(z).
Linear programming
Decision variable: An input that can be specified by the decision maker.
This is also known as a controllable variable (input).
Optimal solution: A feasible solution that maximises or minimises the objective function.
Slack value: The quantity S = RHS - LHS when the constraint is of the form
LHS ≤ RHS.
Surplus value: The quantity S = LHS - RHS when the constraint is of the form
LHS ≥ RHS.
Active constraint: When the slack or surplus value is zero (i.e. LHS = RHS) at the
optimal solution.
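The slack, surplus, and active-constraint definitions can be made concrete with a small check. A hypothetical sketch evaluating the constraints of question 9 at the feasible point (x, y) = (4, 2):

```python
def slack(lhs, rhs):
    """Slack of a <= constraint: S = RHS - LHS (zero means active)."""
    return rhs - lhs

def surplus(lhs, rhs):
    """Surplus of a >= constraint: S = LHS - RHS (zero means active)."""
    return lhs - rhs

x, y = 4, 2   # a feasible point of the programme in question 9
s1 = slack(x + y, 6)             # x + y <= 6
s2 = slack(x + 2 * y, 8)         # x + 2y <= 8
s3 = surplus(3 * x + 2 * y, 12)  # 3x + 2y >= 12
# s1 and s2 are zero, so the first two constraints are active at this point
```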
Sensitivity analysis: The study of how changes in the coefficients of a linear programming
problem affect the optimal solution.
Range of optimality: The range of values over which an objective function coefficient may
vary without causing any change in the value of the decision variables
in the optimal solution.
Range of feasibility: The range of values over which the RHS of a constraint may vary
without causing any change in the nature of the optimal solution.
Reduced cost: The absolute value of the reduced cost is the amount by which an
objective function coefficient would have to improve (increase for a
maximisation problem, decrease for a minimisation problem) before it
would be possible for the corresponding variable to assume a positive
value in the optimal solution.
Shadow price: The change in the value of the objective function per unit increase
in a constraint RHS.