
UNIVERSIDAD CARLOS III DE MADRID, DEPARTAMENTO DE MATEMÁTICAS.

Industrial Electronics and Automation Engineering.


Linear Algebra. Ordinary exam. Course 2022/23 (17/01/23)

Name and surname:


Group number:

Time: 180 minutes

• No electronic devices (including calculators), books, or notes may be used in the exam.


• All answers must be properly justified; otherwise they will not be considered.
• Respond only to what you are asked. Anything else you add may count against
you.
• Everything must be written using a pen (not a pencil).

There are 4 problems (look at the back)

P 1 [1,5 points]: Let G : R3 → R4 be the linear transformation defined by


G(x, y, z) = (x − y, −y − z, x + z, −x + y)^t

a) (0,5 points) Determine a basis for the null space and the column space of G. Check that
the rank theorem holds.
b) (0,5 points) Compute an orthogonal basis of Col(G).
c) (0,5 points) Compute a basis of the orthogonal complement [Nul(G)]⊥ of Nul(G).

P 2: [1,5 points]: Consider the following subsets of P2,


B = {1 − t + t^2, t, 1 − t^2} ,  B′ = {1 + t^2, −1 + t^2, 1 + t}

a) (0,5 points) Prove that B and B′ are bases of P2


b) (0,5 points) What polynomial p(t) has coordinates [p]B′ = (2, −2, 2)^t in the basis B′?
c) (0,5 points) What are the coordinates in the basis B of the polynomial p(t) found in
point b)?

P 3: [1,5 points]: Let A be the following matrix,

    [ 1 0 1 ]
A = [ 1 1 0 ]
    [ 1 1 1 ]
    [ 1 0 1 ]

a) (0,5 points) Compute the projection of the vector ~b = (1, −1, 1, −1)^t onto Col(A).
b) (0,5 points) Solve the least-squares problem related to the system A~x = ~b.
c) (0,5 points) Compute the least-squares error associated with the least-squares solution
found in point b).

P 4: [1,5 points]: Let A be the following matrix,

    [ 2 0 0 ]
A = [ 2 2 2 ]
    [ 1 0 3 ]

a) (0,2 points) Check that λ = 2 and λ = 3 are eigenvalues of A, and compute their
algebraic multiplicities.
b) (0,6 points) Find the eigenspace of each eigenvalue.
c) (0,2 points) Diagonalize the matrix A (if it is not possible, explain why).
d) (0,2 points) Find an orthogonal diagonalization for the matrix A (if it is not possible,
explain why).
e) (0,3 points) If the matrix is diagonalizable, is the inverse diagonalizable too? Justify
your answer.

Solutions:

P 1: The echelon form of the matrix A_G representing the linear transformation G is

[  1 −1  0 ]   [ 1 −1  0 ]   [ 1 −1  0 ]   [ 1 0 1 ]
[  0 −1 −1 ] ~ [ 0 −1 −1 ] ~ [ 0 −1 −1 ] ~ [ 0 1 1 ]   (1)
[  1  0  1 ]   [ 0  1  1 ]   [ 0  0  0 ]   [ 0 0 0 ]
[ −1  1  0 ]   [ 0  0  0 ]   [ 0  0  0 ]   [ 0 0 0 ]

a) Then the null space of G is

Nul(G) = {(x, y, z)^t ∈ R3 : x − y = 0, y + z = 0} = span{(1, 1, −1)^t}

so a basis is B_Nul(G) = {(1, 1, −1)^t}. On the other hand, the rank of A_G is 2, so a basis for
Col(G) is

B_Col(G) = {(1, 0, 1, −1)^t, (−1, −1, 0, 1)^t}

The dimension of the null space is 1 and the dimension of the column space is 2, so the rank
theorem holds:

number of columns of A_G = dim(Col(G)) + dim(Nul(G)) = 2 + 1 = 3
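This computation can be double-checked with a short script (not part of the original exam; the matrix name `A` and the `rank` helper below are ours), using exact rational arithmetic:

```python
from fractions import Fraction

# Matrix A_G of the linear transformation G, one row per output component.
A = [[1, -1, 0],
     [0, -1, -1],
     [1, 0, 1],
     [-1, 1, 0]]

def rank(M):
    """Rank via Gaussian elimination with exact rational arithmetic."""
    M = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols):
        # Find a pivot row at or below row r in column c.
        pivot = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if pivot is None:
            continue  # no pivot in this column
        M[r], M[pivot] = M[pivot], M[r]
        for i in range(r + 1, rows):
            f = M[i][c] / M[r][c]
            M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

print(rank(A))      # 2 = dim Col(G)
print(3 - rank(A))  # 1 = dim Nul(G), so 3 = 2 + 1 as the rank theorem states
```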

b) We apply the Gram-Schmidt process to the basis obtained in part a). We take
u1 = (1, 0, 1, −1)^t, then

u2 ∝ (−1, −1, 0, 1)^t − (⟨(−1, −1, 0, 1)^t, u1⟩ / ⟨u1, u1⟩) u1
   = (−1, −1, 0, 1)^t + (2/3)(1, 0, 1, −1)^t = (−1/3, −1, 2/3, 1/3)^t

So, an orthogonal basis for Col(G) is

B^orth_Col(G) = {(1, 0, 1, −1)^t, (−1, −3, 2, 1)^t}

where the second vector is u2 rescaled by 3.
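The Gram-Schmidt step above can be verified numerically (script ours, not part of the exam), using exact rational arithmetic:

```python
from fractions import Fraction

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

v1 = [Fraction(x) for x in (1, 0, 1, -1)]   # first basis vector of Col(G)
v2 = [Fraction(x) for x in (-1, -1, 0, 1)]  # second basis vector of Col(G)

# Gram-Schmidt step: subtract from v2 its projection onto u1 = v1.
coef = dot(v2, v1) / dot(v1, v1)             # = -2/3
u2 = [b - coef * a for a, b in zip(v1, v2)]  # components -1/3, -1, 2/3, 1/3

print(dot(v1, u2))                            # 0, so the vectors are orthogonal
print([3 * x for x in u2] == [-1, -3, 2, 1])  # True: rescaling by 3
```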

c) The orthogonal complement of Nul(G) consists of all vectors of R3 orthogonal to every
vector in Nul(G), that is, [Nul(G)]⊥ = {~x ∈ R3 : ⟨~x, ~b⟩ = 0 for every ~b ∈ Nul(G)}. Taking
the basis B_Nul(G) = {(1, 1, −1)^t} (calculated in part a)),

[Nul(G)]⊥ = {(x, y, z)^t ∈ R3 : x + y − z = 0} = span{(1, 0, 1)^t, (−1, 1, 0)^t}

Another valid basis is:

[Nul(G)]⊥ = span{(1, −1, 0)^t, (0, −1, −1)^t}
 
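Both candidate bases can be checked against the generator of Nul(G) with a few dot products (script ours, not part of the exam):

```python
# Generator of Nul(G) and the two candidate bases of its orthogonal complement.
n = (1, 1, -1)
basis1 = [(1, 0, 1), (-1, 1, 0)]
basis2 = [(1, -1, 0), (0, -1, -1)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Every candidate vector must be orthogonal to n.
print([dot(n, v) for v in basis1 + basis2])  # [0, 0, 0, 0]
```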

P 2: Let

PB ≡ PE0←B =
[  1 0  1 ]
[ −1 1  0 ]
[  1 0 −1 ]

and

PB′ ≡ PE0←B′ =
[ 1 −1 1 ]
[ 0  0 1 ]
[ 1  1 0 ]

be the change-of-basis matrices of B and B′ with respect to the standard basis of
P2, E0 = {1, t, t^2}.
a) Any set of three linearly independent polynomials of degree at most 2 is a basis of P2.
We can check that the sets B and B′ are linearly independent by verifying that the
determinants of PB and PB′ are nonzero:

|  1 0  1 |            | 1 −1 1 |
| −1 1  0 | = −2 ≠ 0 ,  | 0  0 1 | = −2 ≠ 0
|  1 0 −1 |            | 1  1 0 |

Because the determinants are nonzero, both matrices have rank 3 and both sets are bases of
P2.
b) Since the coordinates of the polynomial p(t) in the basis B′ are [p]B′ = (2, −2, 2)^t, we have:

p(t) = 2(1 + t^2) − 2(−1 + t^2) + 2(1 + t) = 6 + 2t

c) From part b) we know that the coordinates [p]E0 of p(t) in the standard basis E0 are
(6, 2, 0)^t, so its coordinates in the basis B are:

                                           [ 1 0  1 ] [ 6 ]   [ 3 ]
[p]B = PB←E0 [p]E0 = PB^{-1} [p]E0 = (1/2) [ 1 2  1 ] [ 2 ] = [ 5 ]
                                           [ 1 0 −1 ] [ 0 ]   [ 3 ]
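The coordinates found in parts b) and c) are consistent, as a short check shows (script ours, not part of the exam):

```python
# Basis B of P2, each polynomial stored as (constant, t, t^2) coefficients.
B = [(1, -1, 1), (0, 1, 0), (1, 0, -1)]
coords_B = (3, 5, 3)  # coordinates [p]_B computed above

# Reassemble p(t) as the linear combination of the basis polynomials.
p = [sum(c * poly[k] for c, poly in zip(coords_B, B)) for k in range(3)]
print(p)  # [6, 2, 0]  ->  p(t) = 6 + 2t
```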

P 3: b) We start with part b), computing the normal equations and the solution ~s
of the least-squares problem, A^T A ~s = A^T ~b, where

        [ 1 1 1 1 ] [ 1 0 1 ]   [ 4 2 3 ]
A^T A = [ 0 1 1 0 ] [ 1 1 0 ] = [ 2 2 1 ] ,
        [ 1 0 1 1 ] [ 1 1 1 ]   [ 3 1 3 ]
                    [ 1 0 1 ]

and

         [ 1 1 1 1 ] [  1 ]   [ 0 ]
A^T ~b = [ 0 1 1 0 ] [ −1 ] = [ 0 ] ,
         [ 1 0 1 1 ] [  1 ]   [ 1 ]
                     [ −1 ]

so the solution is ~s = (s1, s2, s3)^t = (−2, 1, 2)^t.
a) With this solution, the projection of the vector ~b onto Col(A) is

Proj_Col(A) ~b = A~s = A(−2, 1, 2)^t = (0, −1, 1, 0)^t .

c) The error will be

e = ‖~b − A~s‖ = ‖(1, −1, 1, −1)^t − (0, −1, 1, 0)^t‖ = ‖(1, 0, 0, −1)^t‖ = √2
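The least-squares solution can be validated by checking that the residual is orthogonal to the columns of A, which is exactly what the normal equations state (script ours, not part of the exam):

```python
A = [(1, 0, 1), (1, 1, 0), (1, 1, 1), (1, 0, 1)]
b = (1, -1, 1, -1)
s = (-2, 1, 2)  # least-squares solution found above

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

proj = [dot(row, s) for row in A]               # A s, the projection of b
residual = [bi - pi for bi, pi in zip(b, proj)]

print(proj)  # [0, -1, 1, 0]
# The residual is orthogonal to every column of A (normal equations):
print([dot([row[j] for row in A], residual) for j in range(3)])  # [0, 0, 0]
print(dot(residual, residual))  # 2, so the least-squares error is sqrt(2)
```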

P 4: a) The eigenvalues of A are the solutions of the characteristic polynomial,

| 2−λ   0    0  |
|  2   2−λ   2  | = (2 − λ)^2 (3 − λ) = 0
|  1    0   3−λ |

Then λ = 2 and λ = 3 are the eigenvalues, with algebraic multiplicities 2 and 1 respectively.
b) For λ = 2, the system (A − λ Id)~v = ~0 gives:

[ 0 0 0 ] [ x ]   [ 0 ]
[ 2 0 2 ] [ y ] = [ 0 ]  ⇒  x = −z  ⇒  Vλ=2 = span{(1, 0, −1)^t, (0, 1, 0)^t}
[ 1 0 1 ] [ z ]   [ 0 ]

For λ = 3,

[ −1  0 0 ] [ x ]   [ 0 ]
[  2 −1 2 ] [ y ] = [ 0 ]  ⇒  {x = 0, y = 2z}  ⇒  Vλ=3 = span{(0, 2, 1)^t}
[  1  0 0 ] [ z ]   [ 0 ]

c) Because the dimension of each eigenspace equals the algebraic multiplicity of the
corresponding eigenvalue (λ = 2 and λ = 3), the matrix A is diagonalizable, and we can
write A = P D P^{-1} with

    [ 2 0 0 ]        [  1 0 0 ]
D = [ 0 2 0 ] ,  P = [  0 1 2 ]
    [ 0 0 3 ]        [ −1 0 1 ]
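The factorization can be confirmed without inverting P, since A = P D P^{-1} is equivalent to A P = P D (script ours, not part of the exam):

```python
A = [(2, 0, 0), (2, 2, 2), (1, 0, 3)]
P = [(1, 0, 0), (0, 1, 2), (-1, 0, 1)]
D = [(2, 0, 0), (0, 2, 0), (0, 0, 3)]

def matmul(X, Y):
    """Multiply matrices given as sequences of rows."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

# A = P D P^{-1}  <=>  A P = P D  (no inverse needed).
print(matmul(A, P) == matmul(P, D))  # True
```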

d) A real matrix is orthogonally diagonalizable if and only if the matrix is symmetric.
Because AT 6= A, the matrix A is not orthogonally diagonalizable.
e) If a matrix A is diagonalizable, there exist a diagonal matrix D and an invertible
matrix P such that A = P D P^{-1}. Then,

A^{-1} = (P D P^{-1})^{-1} = (P^{-1})^{-1} D^{-1} P^{-1} = P D^{-1} P^{-1}

The inverse of a diagonal matrix D is the diagonal matrix whose diagonal entries are the
inverses of the diagonal entries of D, provided all eigenvalues are nonzero:

    [ λ1  0 ... 0  ]              [ λ1^{-1}    0    ...    0    ]
D = [ 0  λ2 ... 0  ]  ⇒  D^{-1} = [    0   λ2^{-1} ...    0    ]
    [ 0   0 ... λn ]              [    0       0   ... λn^{-1} ]

if λi ≠ 0 for all i = 1, ..., n (if λi = 0 then λi^{-1} does not exist). Therefore, if A is
diagonalizable with all eigenvalues nonzero, A^{-1} is also diagonalizable (this is equivalent
to |A| ≠ 0).
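Likewise, A^{-1} = P D^{-1} P^{-1} can be checked without computing P^{-1}, since it is equivalent to A P D^{-1} = P (script ours, not part of the exam):

```python
from fractions import Fraction

A = [(2, 0, 0), (2, 2, 2), (1, 0, 3)]
P = [(1, 0, 0), (0, 1, 2), (-1, 0, 1)]
# D^{-1}: invert each eigenvalue on the diagonal (all nonzero here).
Dinv = [(Fraction(1, 2), 0, 0),
        (0, Fraction(1, 2), 0),
        (0, 0, Fraction(1, 3))]

def matmul(X, Y):
    """Multiply matrices given as sequences of rows."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

# A^{-1} = P D^{-1} P^{-1}  <=>  A P D^{-1} = P.
print(matmul(matmul(A, P), Dinv) == [[1, 0, 0], [0, 1, 2], [-1, 0, 1]])  # True
```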
