763313A QUANTUM MECHANICS II - solution set 2 - spring 2014

1. Eigenvalues and Eigenvectors of a Diagonal Matrix


Let A be a diagonal and real 2 × 2 matrix whose diagonal elements are not
equal.
a) Find the eigenvalues and eigenvectors of A.
b) Find the Hermitian matrices that commute with A.
c) Find the eigenvalues and eigenvectors of the matrices found in b).

Solution
a) Let us use the notation

A = \begin{pmatrix} a_{11} & 0 \\ 0 & a_{22} \end{pmatrix}. \qquad (1.1)

By using the definition of the matrix product, one can easily show that the
eigenvectors of a diagonal n × n matrix are the n-component standard basis
vectors. Furthermore, when one constructs that proof, one sees that the diagonal
elements are the eigenvalues. Thus the eigenvalues and eigenvectors of A are
 
\lambda = a_{11}: \quad |a_{11}\rangle = \begin{pmatrix} 1 \\ 0 \end{pmatrix},
\qquad
\lambda = a_{22}: \quad |a_{22}\rangle = \begin{pmatrix} 0 \\ 1 \end{pmatrix}. \qquad (1.2)
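
As a quick numerical illustration (not part of the original solution set), the claim can be checked with NumPy for an arbitrary choice of unequal diagonal elements; the values 2 and -3 below are only an example.

import numpy as np

# Example diagonal matrix with unequal diagonal elements (values chosen arbitrarily).
A = np.diag([2.0, -3.0])

# numpy.linalg.eig returns the eigenvalues and a matrix whose columns are eigenvectors.
eigvals, eigvecs = np.linalg.eig(A)

print(eigvals)   # [ 2. -3.] -> the diagonal elements a11, a22
print(eigvecs)   # identity matrix -> the standard basis vectors |a11>, |a22>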

b) We call a matrix B Hermitian if it is equal to its conjugate transpose, i.e. B = B^\dagger. Consequently, a Hermitian 2 × 2 matrix is of the form

B = \begin{pmatrix} b_{11} & b_{12} \\ b_{12}^* & b_{22} \end{pmatrix}, \qquad (1.3)

where b_{11}, b_{22} ∈ ℝ. That is, a 2 × 2 matrix is Hermitian if its diagonal elements are real and its off-diagonal elements are complex conjugates of each other. Now, direct calculation yields
    
AB = \begin{pmatrix} a_{11} & 0 \\ 0 & a_{22} \end{pmatrix}
     \begin{pmatrix} b_{11} & b_{12} \\ b_{12}^* & b_{22} \end{pmatrix}
   = \begin{pmatrix} a_{11} b_{11} & a_{11} b_{12} \\ a_{22} b_{12}^* & a_{22} b_{22} \end{pmatrix} \qquad (1.4)

and

BA = \begin{pmatrix} b_{11} & b_{12} \\ b_{12}^* & b_{22} \end{pmatrix}
     \begin{pmatrix} a_{11} & 0 \\ 0 & a_{22} \end{pmatrix}
   = \begin{pmatrix} a_{11} b_{11} & a_{22} b_{12} \\ a_{11} b_{12}^* & a_{22} b_{22} \end{pmatrix}. \qquad (1.5)

Employing Eqs. (1.4) and (1.5), we obtain
 
AB - BA = \begin{pmatrix} 0 & b_{12}(a_{11} - a_{22}) \\ b_{12}^*(a_{22} - a_{11}) & 0 \end{pmatrix}. \qquad (1.6)

As we set this commutator equal to zero, we obtain b_{12} = 0 (because a_{11} ≠ a_{22}).
Thus a Hermitian matrix B commutes with A if and only if it is diagonal.
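
The same conclusion can be illustrated with a minimal NumPy sketch (not part of the original solution); the numerical values of A and B below are arbitrary examples.

import numpy as np

A = np.diag([2.0, -3.0])                    # diagonal, a11 != a22

B_offdiag = np.array([[1.0, 2.0 + 1.0j],
                      [2.0 - 1.0j, 4.0]])   # Hermitian, b12 != 0
B_diag = np.diag([1.0, 4.0])                # Hermitian and diagonal

print(A @ B_offdiag - B_offdiag @ A)        # nonzero off-diagonal entries: does not commute
print(A @ B_diag - B_diag @ A)              # zero matrix: commutes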

c) We recall that the matrices we found in b) are diagonal Hermitian matrices. We easily see that a diagonal Hermitian matrix is a diagonal real matrix. Thus, for the case of unequal diagonal elements, we can refer to part a) of this problem. Similarly, for the case of equal diagonal elements, we can refer to part a) of problem 2.

2. Degenerate Eigenvalues
Let A be a real and diagonal 2 × 2 matrix whose diagonal elements are equal.
Let B be a real and symmetric 2 × 2 matrix
 
B = \begin{pmatrix} 0 & b \\ b & c \end{pmatrix}.
a) Find the eigenvalues and eigenvectors of matrix A.
b) Show that A and B commute.
c) Find the simultaneous eigenvectors of the matrices.

Solution
a) Let us use the notation
 
A = \begin{pmatrix} a & 0 \\ 0 & a \end{pmatrix}. \qquad (2.1)
We immediately see that
A = a I_{2\times 2}. \qquad (2.2)
Consequently, any X ∈ ℝ² is an eigenvector of A corresponding to the eigenvalue a.
In problem 1 a), we stated that the eigenvectors of a diagonal n × n matrix are the n-component standard basis vectors. However, now any X ∈ ℝ² is an eigenvector of A. Indeed, there are now more eigenvectors than the statement claims there to be. The explanation is that the statement is slightly imprecise. To be more precise, it does not take into account the following fact: if eigenvectors X_1, X_2, ..., X_k all correspond to the same eigenvalue, say λ, then any linear combination c_1 X_1 + c_2 X_2 + ... + c_k X_k also corresponds to λ. Nevertheless, the statement gives us a complete orthonormal set of eigenvectors.[1] Since a complete set of eigenvectors is just what we usually want to construct, the statement is perfectly usable.
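
As a two-line numerical illustration of the degeneracy (not part of the original solution), A = a I maps every vector to a times itself; the value a = 2.5 and the test vector below are arbitrary.

import numpy as np

a = 2.5
A = a * np.eye(2)                    # A = a*I, doubly degenerate eigenvalue a
x = np.array([0.3, -1.7])            # an arbitrary vector
print(np.allclose(A @ x, a * x))     # True: every X in R^2 is an eigenvector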

b) Because A is a constant times the identity matrix, A and B commute with each other.

c) The characteristic equation of matrix B reads λ^2 − cλ − b^2 = 0. By solving it, we obtain the eigenvalues λ_1 = (c + √(c^2 + 4b^2))/2 and λ_2 = (c − √(c^2 + 4b^2))/2. We note that the eigenvalues are unequal (we assume that B is not a zero matrix). We obtain the eigenvectors of B from the equation

\begin{pmatrix} 0 & b \\ b & c \end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \end{pmatrix}
= \lambda \begin{pmatrix} x_1 \\ x_2 \end{pmatrix}. \qquad (2.3)
[1] We say that a set of eigenvectors is complete if all eigenvectors can be expressed as linear combinations of the ones belonging to the set.

Written component-wise, Eq. (2.3) reads

-\lambda x_1 + b x_2 = 0,
 b x_1 + (c - \lambda) x_2 = 0. \qquad (2.4)

For the sake of convenience, we restrict ourselves to the case b ≠ 0. Then we obtain x_2 = (λ/b) x_1 from the first of Eqs. (2.4). Consequently, the eigenvalues and eigenvectors of B are

 
\lambda_1 = \tfrac{1}{2}\left(c + \sqrt{c^2 + 4b^2}\right): \quad |\lambda_1\rangle = c_1 \begin{pmatrix} 1 \\ \lambda_1/b \end{pmatrix},

\lambda_2 = \tfrac{1}{2}\left(c - \sqrt{c^2 + 4b^2}\right): \quad |\lambda_2\rangle = c_2 \begin{pmatrix} 1 \\ \lambda_2/b \end{pmatrix}, \qquad (2.5)

where c_1, c_2 ∈ ℂ \ {0}.
Let us now normalize the eigenvectors. We could determine the normalization factors by an easy mental calculation; however, for the sake of exercise, we carry out the formal calculation. Setting ⟨λ_1|λ_1⟩ = 1, we obtain

c_1^* \begin{pmatrix} 1 & \lambda_1/b \end{pmatrix} c_1 \begin{pmatrix} 1 \\ \lambda_1/b \end{pmatrix} = 1
\quad \Leftrightarrow \quad
|c_1|^2 \left( 1 + \frac{\lambda_1^2}{b^2} \right) = 1. \qquad (2.6)

We see that normalization determines c_1 up to an arbitrary phase factor. Choosing c_1 to be real and positive, we obtain

c_1 = \frac{1}{\sqrt{1 + \lambda_1^2/b^2}}. \qquad (2.7)

Similarly, we obtain

c_2 = \frac{1}{\sqrt{1 + \lambda_2^2/b^2}}. \qquad (2.8)

Consequently, the eigenvalues and normalized eigenvectors of B are



 
\lambda_1 = \tfrac{1}{2}\left(c + \sqrt{c^2 + 4b^2}\right): \quad |\lambda_1\rangle = \frac{1}{\sqrt{1 + \lambda_1^2/b^2}} \begin{pmatrix} 1 \\ \lambda_1/b \end{pmatrix},

\lambda_2 = \tfrac{1}{2}\left(c - \sqrt{c^2 + 4b^2}\right): \quad |\lambda_2\rangle = \frac{1}{\sqrt{1 + \lambda_2^2/b^2}} \begin{pmatrix} 1 \\ \lambda_2/b \end{pmatrix}. \qquad (2.9)

We recall that any X ∈ ℝ² is an eigenvector of A. Therefore |λ_1⟩ and |λ_2⟩ are the simultaneous eigenvectors of A and B.
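
The closed-form result (2.9) can be cross-checked numerically. The sketch below (not part of the original solution) uses the arbitrary example values b = 1, c = 2 and compares Eq. (2.9) with numpy.linalg.eigh.

import numpy as np

b, c = 1.0, 2.0                                    # arbitrary example values, b != 0
B = np.array([[0.0, b],
              [b,   c]])

# Closed-form eigenvalues and normalized eigenvectors from Eq. (2.9).
lam1 = 0.5 * (c + np.sqrt(c**2 + 4 * b**2))
lam2 = 0.5 * (c - np.sqrt(c**2 + 4 * b**2))
v1 = np.array([1.0, lam1 / b]) / np.sqrt(1 + lam1**2 / b**2)
v2 = np.array([1.0, lam2 / b]) / np.sqrt(1 + lam2**2 / b**2)

# Check B|lam_i> = lam_i |lam_i>.
print(np.allclose(B @ v1, lam1 * v1))              # True
print(np.allclose(B @ v2, lam2 * v2))              # True

# Compare with NumPy's eigensolver (eigh returns eigenvalues in ascending order).
print(np.linalg.eigh(B)[0], (lam2, lam1))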

3. Diagonalization of a Matrix
Diagonalize the matrix B of the previous problem. Give an interpretation of
the matrix elements of the matrix obtained this way.

Solution
For every Hermitian matrix H there is a unitary matrix U so that

D = U H U^{-1} \qquad (3.1)

is diagonal. The columns of U^{-1} are the normalized eigenvectors of H (cf. lecture notes, p. 9).
We recall that B is a real and symmetric matrix
 
B = \begin{pmatrix} 0 & b \\ b & c \end{pmatrix}. \qquad (3.2)
The eigenvalues and eigenvectors of B are [cf. Eq. (2.9)]

 
\lambda_1 = \tfrac{1}{2}\left(c + \sqrt{c^2 + 4b^2}\right): \quad |\lambda_1\rangle = \frac{1}{\sqrt{1 + \lambda_1^2/b^2}} \begin{pmatrix} 1 \\ \lambda_1/b \end{pmatrix},

\lambda_2 = \tfrac{1}{2}\left(c - \sqrt{c^2 + 4b^2}\right): \quad |\lambda_2\rangle = \frac{1}{\sqrt{1 + \lambda_2^2/b^2}} \begin{pmatrix} 1 \\ \lambda_2/b \end{pmatrix}. \qquad (3.3)
Thus the inverse of the unitary matrix that diagonalizes B is
U^{-1} = \begin{pmatrix} \dfrac{1}{\sqrt{1 + \lambda_1^2/b^2}} & \dfrac{1}{\sqrt{1 + \lambda_2^2/b^2}} \\[6pt] \dfrac{\lambda_1/b}{\sqrt{1 + \lambda_1^2/b^2}} & \dfrac{\lambda_2/b}{\sqrt{1 + \lambda_2^2/b^2}} \end{pmatrix}. \qquad (3.4)

From matrix theory we know that D = U B U^{-1} is diagonal with the eigenvalues of B as the diagonal elements. However, for the sake of exercise, let us calculate D.
We introduce the notation

U = \begin{pmatrix} U_1 & U_2 & \cdots & U_n \end{pmatrix}. \qquad (3.5)

That is, we denote the ith column of U by U_i. By using the definition of the matrix product, we can easily show that

BU = \begin{pmatrix} BU_1 & BU_2 & \cdots & BU_n \end{pmatrix}. \qquad (3.6)

So we can write Eq. (3.4) as

U^{-1} = \begin{pmatrix} |\lambda_1\rangle & |\lambda_2\rangle \end{pmatrix}. \qquad (3.7)
Thus
BU^{-1} = \begin{pmatrix} B|\lambda_1\rangle & B|\lambda_2\rangle \end{pmatrix} = \begin{pmatrix} \lambda_1|\lambda_1\rangle & \lambda_2|\lambda_2\rangle \end{pmatrix}. \qquad (3.8)

Employing Eqs. (3.7) and (3.8), we obtain
D = U B U^{-1}
  = \begin{pmatrix} \langle\lambda_1| \\ \langle\lambda_2| \end{pmatrix}
    \begin{pmatrix} \lambda_1|\lambda_1\rangle & \lambda_2|\lambda_2\rangle \end{pmatrix}
  = \begin{pmatrix} \lambda_1\langle\lambda_1|\lambda_1\rangle & \lambda_2\langle\lambda_1|\lambda_2\rangle \\ \lambda_1\langle\lambda_2|\lambda_1\rangle & \lambda_2\langle\lambda_2|\lambda_2\rangle \end{pmatrix}
  = \begin{pmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{pmatrix}. \qquad (3.9)

We obtained the expected result, namely that D is diagonal with the eigenvalues of
B as its diagonal elements.
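
As a further sanity check (not part of the original solution, again with the arbitrary example values b = 1, c = 2), one can assemble U^{-1} from the normalized eigenvectors and verify numerically that U B U^{-1} is diagonal with λ_1 and λ_2 on the diagonal.

import numpy as np

b, c = 1.0, 2.0
B = np.array([[0.0, b], [b, c]])

lam1 = 0.5 * (c + np.sqrt(c**2 + 4 * b**2))
lam2 = 0.5 * (c - np.sqrt(c**2 + 4 * b**2))
v1 = np.array([1.0, lam1 / b]) / np.sqrt(1 + lam1**2 / b**2)
v2 = np.array([1.0, lam2 / b]) / np.sqrt(1 + lam2**2 / b**2)

U_inv = np.column_stack([v1, v2])   # columns are the normalized eigenvectors, cf. Eq. (3.4)
U = np.linalg.inv(U_inv)            # equals U_inv.T here, since U_inv is real and orthogonal

D = U @ B @ U_inv
print(np.round(D, 12))              # diag(lam1, lam2), cf. Eq. (3.9)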

4. The Simultaneous Diagonalization of Pauli Matrices
Show that there is no unitary matrix that diagonalizes both matrices σ1 and
σ2 .

Solution
We recall the definitions (cf. Wikipedia, Pauli matrices)
 
\sigma_1 = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \qquad (4.1)

\sigma_2 = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}. \qquad (4.2)

We easily see that σ_1 and σ_2 are Hermitian. On the other hand, two Hermitian matrices A and B can be diagonalized by the same unitary matrix U if and only if they commute (cf. lectures, p. 10). Therefore it suffices to show that σ_1 and σ_2 do not commute. A direct calculation gives

\sigma_1 \sigma_2 - \sigma_2 \sigma_1 = \begin{pmatrix} i & 0 \\ 0 & -i \end{pmatrix} - \begin{pmatrix} -i & 0 \\ 0 & i \end{pmatrix} = \begin{pmatrix} 2i & 0 \\ 0 & -2i \end{pmatrix} \neq 0,

so σ_1 and σ_2 do not commute, and hence no unitary matrix diagonalizes them both.
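
A short NumPy check of the commutator (not part of the original solution):

import numpy as np

sigma1 = np.array([[0, 1], [1, 0]], dtype=complex)
sigma2 = np.array([[0, -1j], [1j, 0]])
sigma3 = np.array([[1, 0], [0, -1]], dtype=complex)

commutator = sigma1 @ sigma2 - sigma2 @ sigma1
print(commutator)                               # [[2i, 0], [0, -2i]]
print(np.allclose(commutator, 2j * sigma3))     # True: [sigma1, sigma2] = 2i*sigma3 != 0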

5. Application of the Campbell-Baker-Hausdorff Lemma
Let A and B be two non-commuting operators. Show that
e^A e^B = e^{A + B + \frac{1}{2}[A,B]} \qquad (i)

holds up to second order in operator multiplication.

Solution
Let C be an operator. Furthermore, let f be a function that has a Maclaurin
series

f(x) = \sum_{k=0}^{\infty} a_k x^k. \qquad (5.1)

We define

f(C) = \sum_{k=0}^{\infty} a_k C^k, \qquad (5.2)

where C^0 is to be understood as the identity operator I. Thus, by definition,


e^A = I + A + \tfrac{1}{2} A^2 + \cdots, \qquad (5.3)

e^B = I + B + \tfrac{1}{2} B^2 + \cdots. \qquad (5.4)
Employing Eqs. (5.3) and (5.4), we obtain
  
e^A e^B = \left( I + A + \tfrac{1}{2} A^2 + \cdots \right) \left( I + B + \tfrac{1}{2} B^2 + \cdots \right)
        = I + A + \tfrac{1}{2} A^2 + B + AB + \tfrac{1}{2} A^2 B + \tfrac{1}{2} B^2 + \tfrac{1}{2} A B^2 + \tfrac{1}{4} A^2 B^2 + \cdots
        = I + A + B + AB + \tfrac{1}{2} A^2 + \tfrac{1}{2} B^2 + \cdots. \qquad (5.5)
2 2
By definition,
e^{A + B + \frac{1}{2}[A,B]} = I + A + B + \tfrac{1}{2}(AB - BA)
  + \tfrac{1}{2} \left( A + B + \tfrac{1}{2}(AB - BA) \right)^2 + \cdots
= I + A + B + \tfrac{1}{2} AB - \tfrac{1}{2} BA
  + \tfrac{1}{2} A^2 + \tfrac{1}{2} B^2 + \tfrac{1}{2} AB + \tfrac{1}{2} BA + \cdots
= I + A + B + AB + \tfrac{1}{2} A^2 + \tfrac{1}{2} B^2 + \cdots. \qquad (5.6)
2 2
Comparing Eq. (5.5) with Eq. (5.6), we see that Eq. (i) holds up to second order in operator multiplication.
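
The phrase "up to second order" can also be illustrated numerically (this sketch is not part of the original solution): if both operators are scaled by a small parameter t, the two sides of Eq. (i) should differ only at order t^3. The 2 × 2 matrices X and Y below are arbitrary non-commuting examples, and scipy.linalg.expm computes the matrix exponential.

import numpy as np
from scipy.linalg import expm

X = np.array([[0.0, 1.0], [0.0, 0.0]])   # arbitrary non-commuting matrices
Y = np.array([[0.0, 0.0], [1.0, 0.0]])

for t in (1e-1, 1e-2, 1e-3):
    A, B = t * X, t * Y
    lhs = expm(A) @ expm(B)
    rhs = expm(A + B + 0.5 * (A @ B - B @ A))
    # The difference should shrink roughly like t**3 (about 1000x per step here).
    print(t, np.max(np.abs(lhs - rhs)))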
