PHY202 2024 Note No 3

PHY202 (Spring 2024) Lecture Note No. 3

Kenji Nishiwaki*
Shiv Nadar Institution of Eminence, Department of Physics
Ver. 17 January 2024 (17h 33min, Indian Standard Time)

We will provide a concise review of Linear Algebra.

1 On finite-dimensional vectors

1.1 Finite-dimensional Hilbert Space

In quantum mechanics, physical states (of a quantum system) are represented as vectors. The abstract form |ψ⟩, called a ket vector, is widely used to represent a vector describing a physical state ψ.
A Hilbert space H is a complex vector space in which an inner product is well-defined and which has completeness:

• For any two elements |ψ₁⟩, |ψ₂⟩ ∈ V, if the sum |ψ₁⟩ + |ψ₂⟩ and the scalar multiplication by an arbitrary complex number c (∈ C),

    c |ψ⟩ = |cψ⟩ = |ψ⟩ c,

are well-defined and both of their results are also elements of V, then V is a complex vector space.

• For a well-defined inner product to exist in V, a dual vector (living in a dual vector space) ⟨ψ|, called a bra vector, should exist for every element |ψ⟩ of V. The inner product (in a complex vector space) is defined as a mapping:

    {⟨χ|, |ψ⟩} → ⟨χ|ψ⟩ or ⟨χ|(|ψ⟩) ∈ C,    (1.1)

where |ψ⟩, |χ⟩ ∈ V. The form ⟨χ|ψ⟩ is called a braket.

• Completeness will be discussed later.

• In a vector space V, there is a vector with special properties (called the zero vector or null vector), written 0 (or ~0, |null⟩):

    |ψ⟩ + |null⟩ = |null⟩ + |ψ⟩ = |ψ⟩,    (1.2)

where |ψ⟩ is an arbitrary vector of V. An inner product should possess the following properties:

    ⟨χ|(|ψ₁⟩ + |ψ₂⟩) = ⟨χ|ψ₁⟩ + ⟨χ|ψ₂⟩,    (1.3)
    ⟨χ|cψ⟩ = c ⟨χ|ψ⟩,    (1.4)
    (⟨χ₁| + ⟨χ₂|) |ψ⟩ = ⟨χ₁|ψ⟩ + ⟨χ₂|ψ⟩,    (1.5)
    ⟨cχ|ψ⟩ = c* ⟨χ|ψ⟩,    (1.6)
    ⟨ψ|χ⟩ = ⟨χ|ψ⟩*  ⇒  if χ = ψ, ⟨ψ|ψ⟩ is real,    (1.7)
    ⟨ψ|ψ⟩ ≥ 0,    (1.8)
    ⟨ψ|ψ⟩ = 0 ⇔ |ψ⟩ = |null⟩,    (1.9)
* e-mail: kenji.nishiwaki@snu.edu.in
where you should be careful about the following point about the bra vector:

    ⟨cψ| = c* ⟨ψ| = ⟨ψ| c*.    (1.10)

• A norm of a ket vector |ψ⟩ (or of the bra vector ⟨ψ|) is defined by use of the corresponding inner product as

    ‖|ψ⟩‖ := √⟨ψ|ψ⟩ (≥ 0).    (1.11)

The norm of the difference of two vectors |ψ₁⟩ and |ψ₂⟩,

    ‖|ψ₁⟩ − |ψ₂⟩‖ = √[(⟨ψ₁| − ⟨ψ₂|)(|ψ₁⟩ − |ψ₂⟩)],    (1.12)

describes the distance between |ψ₁⟩ and |ψ₂⟩. Due to the non-negativity of the inner product, we can conclude

    ‖|ψ₁⟩ − |ψ₂⟩‖ = 0 ⇔ |ψ₁⟩ = |ψ₂⟩.    (1.13)

• The vector c |ψ⟩ (c ≠ 0) is said to be ‘parallel to |ψ⟩’, and if ⟨χ|ψ⟩ = 0, |χ⟩ and |ψ⟩ are said to be ‘orthogonal to each other’ (|ψ⟩ ⊥ |χ⟩).

• A unit vector is defined as a vector with unit norm. If |ψ⟩ is not a unit vector, then as long as |ψ⟩ ≠ |null⟩, the renormalised vector

    |ψ̂⟩ := (1/‖|ψ⟩‖) |ψ⟩ = (1/√⟨ψ|ψ⟩) |ψ⟩    (1.14)

has unit norm, ⟨ψ̂|ψ̂⟩ = 1.
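The norm and the renormalisation of Eqs. (1.11) and (1.14) can be checked numerically. A minimal NumPy sketch (the vector `psi` below is an illustrative choice, not taken from the text):

```python
import numpy as np

# An arbitrary example ket in C^3, represented as a complex column vector.
psi = np.array([1 + 2j, 0.5j, -3.0])

# Norm via the inner product <psi|psi> = sum_i psi_i^* psi_i, Eq. (1.11).
# np.vdot conjugates its first argument, implementing the bra.
norm = np.sqrt(np.vdot(psi, psi).real)

# Renormalised vector of Eq. (1.14): |psi-hat> = |psi> / ||psi||.
psi_hat = psi / norm

# <psi-hat|psi-hat> should be 1 up to floating-point rounding.
unit_check = np.vdot(psi_hat, psi_hat).real
```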

An important example of a finite-dimensional Hilbert space is the n-dimensional complex Euclidean space Cⁿ (n ∈ N), where an element of Cⁿ is concretely represented as

    |ψ⟩ = Σ_{i=1}^{n} ψ_i |ê_i⟩ ≐ (ψ₁, ψ₂, ···, ψ_n)ᵀ.    (1.15)

Here, the symbol ≐ stands for “is represented by,” and |ê_i⟩ is the unit ket vector of the i-th direction in Cartesian coordinates.¹ In the second step of Eq. (1.15), we adopt the following correspondence between the basis vectors and the column vectors (without mentioning the choice of the basis vectors manifestly):

    |ê₁⟩ ⇔ (1, 0, ···, 0)ᵀ,  |ê₂⟩ ⇔ (0, 1, ···, 0)ᵀ,  ···,  |ê_n⟩ ⇔ (0, 0, ···, 1)ᵀ.    (1.16)

¹ Precisely speaking, it does not make sense to display only the coefficients without an explicit indication of the basis vectors. In the following, what basis vectors are used shall be determined from the context.

The bra vector and the standard inner product in Cⁿ are represented as follows:

    ⟨χ| = Σ_{i=1}^{n} χ_i* ⟨ê_i| ≐ (χ₁*, χ₂*, ···, χ_n*),    (1.17)

    ⟨χ|ψ⟩ = (Σ_{j=1}^{n} χ_j* ⟨ê_j|)(Σ_{i=1}^{n} ψ_i |ê_i⟩)
          = Σ_{i,j=1}^{n} χ_j* ψ_i ⟨ê_j|ê_i⟩ = Σ_{i=1}^{n} χ_i* ψ_i,    (1.18)

where we used ⟨ê_j|ê_i⟩ = δ_ji.

Mathematically, no components can be specified before identifying basis vectors. In physics, however, we simply introduce components under the implicit adoption of the Cartesian basis vectors as follows:

    |ψ⟩ = (ψ₁, ψ₂, ···, ψ_n)ᵀ,    (1.19)

where the equality symbol is simply used. Hereafter, we follow this naive custom in physics, where we do not discriminate between the introduction of concrete representations (≐) and equality (=). Here, the sum of ket vectors and the scalar multiplication of a ket vector are represented as

    |ψ⟩ + |φ⟩ = (ψ₁, ψ₂, ···, ψ_n)ᵀ + (φ₁, φ₂, ···, φ_n)ᵀ = (ψ₁ + φ₁, ψ₂ + φ₂, ···, ψ_n + φ_n)ᵀ,    (1.20)
    c |ψ⟩ = c (ψ₁, ψ₂, ···, ψ_n)ᵀ = (cψ₁, cψ₂, ···, cψ_n)ᵀ,    (1.21)

where c ∈ C, and the inner product is represented as

    ⟨χ|ψ⟩ = (χ₁*, χ₂*, ···, χ_n*) (ψ₁, ψ₂, ···, ψ_n)ᵀ = Σ_{i=1}^{n} χ_i* ψ_i.    (1.22)

Here, ⟨χ| is represented as the Hermitian conjugate of |χ⟩:

    ⟨χ| = (|χ⟩)† := ((χ₁, χ₂, ···, χ_n)ᵀ)*ᵀ = (χ₁*, χ₂*, ···, χ_n*).    (1.23)

The squared norm of |ψ⟩ is calculated as

    ‖|ψ⟩‖² = ⟨ψ|ψ⟩ = Σ_{i=1}^{n} ψ_i* ψ_i = Σ_{i=1}^{n} |ψ_i|².    (1.24)
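In Cⁿ the braket of Eq. (1.22) is exactly the conjugate-transpose dot product. A minimal check with NumPy (the vectors are illustrative, not from the text):

```python
import numpy as np

# Illustrative vectors in C^2.
chi = np.array([1 + 1j, 2 - 1j])
psi = np.array([3j, 1 + 0j])

# Eq. (1.22): <chi|psi> = sum_i chi_i^* psi_i; np.vdot conjugates its first argument.
braket = np.vdot(chi, psi)

# Equivalent explicit form: (|chi>)^dagger acting on |psi> as row-times-column.
braket_explicit = chi.conj() @ psi

# Eq. (1.24): ||psi||^2 = <psi|psi> = sum_i |psi_i|^2, a real number.
sq_norm = np.vdot(psi, psi).real
```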
1.2 Cauchy–Schwarz inequality

The following inequality, called the Cauchy–Schwarz inequality, holds for any pair of vectors |ψ⟩, |χ⟩ in a Hilbert space H:

    ⟨ψ|ψ⟩ ⟨χ|χ⟩ ≥ |⟨χ|ψ⟩|² = |⟨ψ|χ⟩|².    (1.25)

This is one of the most important relationships in a Hilbert space.

[Proof of Eq. (1.25)]

• If ⟨χ|χ⟩ = 0, due to the non-negativity, |χ⟩ must be the null vector |null⟩. From |null⟩ + |null⟩ = |null⟩, we can conclude

    ⟨null|ψ⟩ + ⟨null|ψ⟩ = ⟨null|ψ⟩ ∴ ⟨null|ψ⟩ = 0.    (1.26)

Thus, both the left-hand and right-hand sides of Eq. (1.25) are zero, and the Cauchy–Schwarz inequality holds.

• If ⟨χ|χ⟩ ≠ 0, for the following state with t ∈ C,

    |θ⟩ := |ψ⟩ − t |χ⟩,    (1.27)

we evaluate the norm of |θ⟩ as

    ⟨θ|θ⟩ = ⟨ψ − tχ|ψ − tχ⟩
          = ⟨ψ|ψ⟩ + ⟨ψ|(−t)χ⟩ + ⟨(−t)χ|ψ⟩ + ⟨(−t)χ|(−t)χ⟩
          = ⟨ψ|ψ⟩ − t ⟨ψ|χ⟩ − t* ⟨χ|ψ⟩ + |t|² ⟨χ|χ⟩ ≥ 0.    (1.28)

Here, if we choose the arbitrary complex number t as

    t → t̃ := ⟨χ|ψ⟩ / ⟨χ|χ⟩,    (1.29)

we get

    ⟨θ|θ⟩ = ⟨ψ|ψ⟩ − (⟨χ|ψ⟩/⟨χ|χ⟩) ⟨ψ|χ⟩ − (⟨χ|ψ⟩*/⟨χ|χ⟩) ⟨χ|ψ⟩ + (|⟨χ|ψ⟩|²/⟨χ|χ⟩²) ⟨χ|χ⟩ ≥ 0
    ⇔ ⟨ψ|ψ⟩ − ⟨χ|ψ⟩ ⟨ψ|χ⟩ / ⟨χ|χ⟩ ≥ 0
    ⇔ ⟨ψ|ψ⟩ ≥ |⟨χ|ψ⟩|² / ⟨χ|χ⟩
    ⇔ ⟨ψ|ψ⟩ ⟨χ|χ⟩ ≥ |⟨χ|ψ⟩|²,    (1.30)

and thus the Cauchy–Schwarz inequality holds. The equality of the Cauchy–Schwarz inequality is realised when and only when

    |θ⟩ = |null⟩ ⇔ |ψ⟩ = t |χ⟩,    (1.31)

for a complex number t.
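The inequality and its equality condition can be verified numerically. A sketch with NumPy, using random illustrative vectors (the seed and the factor `t` are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# Random complex vectors in C^4 (any pair works).
psi = rng.standard_normal(4) + 1j * rng.standard_normal(4)
chi = rng.standard_normal(4) + 1j * rng.standard_normal(4)

lhs = np.vdot(psi, psi).real * np.vdot(chi, chi).real  # <psi|psi><chi|chi>
rhs = abs(np.vdot(chi, psi)) ** 2                      # |<chi|psi>|^2

# Equality case of Eq. (1.31): |psi> = t|chi> for some complex t.
t = 2 - 0.5j
psi_par = t * chi
lhs_eq = np.vdot(psi_par, psi_par).real * np.vdot(chi, chi).real
rhs_eq = abs(np.vdot(chi, psi_par)) ** 2
```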

Also, we note that the norm of the state |θ⟩ defined in Eq. (1.27) is minimised when we take t as t̃ in Eq. (1.29), as follows:

    ⟨χ|θ⟩ = ⟨χ|(|ψ⟩ − t̃ |χ⟩) = ⟨χ|ψ⟩ − (⟨χ|ψ⟩/⟨χ|χ⟩) ⟨χ|χ⟩ = 0,    (1.32)

i.e., |θ⟩ is orthogonal to |χ⟩. Here, due to the relation |ψ⟩ = t̃ |χ⟩ + |θ⟩, the vector |ψ⟩ is decomposed into the parallel component t̃ |χ⟩ (to |χ⟩) and the orthogonal component |θ⟩ (to |χ⟩). The vector

    (⟨χ|ψ⟩/⟨χ|χ⟩) |χ⟩ = |χ⟩⟨χ|ψ⟩ / ⟨χ|χ⟩    (1.33)

is called the projection of the vector |ψ⟩ onto the |χ⟩-direction.
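The decomposition |ψ⟩ = t̃|χ⟩ + |θ⟩ can be reproduced in a few lines of NumPy (illustrative vectors, not from the text):

```python
import numpy as np

# Illustrative vectors in C^3.
psi = np.array([1.0, 2j, -1.0])
chi = np.array([1.0, 1.0, 0j])

# t-tilde of Eq. (1.29) and the decomposition |psi> = t|chi> + |theta>.
t = np.vdot(chi, psi) / np.vdot(chi, chi)
parallel = t * chi          # projection onto the |chi>-direction, Eq. (1.33)
theta = psi - parallel      # orthogonal remainder, Eq. (1.27)

overlap = np.vdot(chi, theta)  # should vanish, Eq. (1.32)
```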

1.3 Basis

For several vectors |χ₁⟩, |χ₂⟩, ···, |χ_k⟩ (k ∈ N) in a Hilbert space H, consider the equation

    c₁ |χ₁⟩ + c₂ |χ₂⟩ + ··· + c_k |χ_k⟩ = 0 (= |null⟩).    (1.34)

If c₁ = c₂ = ··· = c_k = 0 is its only solution, the set of vectors {|χ₁⟩, |χ₂⟩, ···, |χ_k⟩} is called linearly independent. Being not linearly independent is called linearly dependent. If a set of vectors is linearly dependent, at least one of the coefficients c₁, c₂, ···, c_k must be nonzero. For example, if c_j ≠ 0, after defining b_i := −c_i/c_j for i = 1, 2, ···, j − 1, j + 1, ···, k, we get

    |χ_j⟩ = b₁ |χ₁⟩ + b₂ |χ₂⟩ + ··· + (no j-th term) + ··· + b_k |χ_k⟩,    (1.35)

which means that |χ_j⟩ is written as a linear combination of the other vectors; that is, {|χ₁⟩, |χ₂⟩, ···, |χ_k⟩} is linearly dependent.
A set of vectors such that the number of linearly independent vectors cannot be increased any further is called, in particular, a basis of H. The number of vectors of a basis in H is called the dimension of H. When the dimension of H is n (dim H = n), if we appropriately select n vectors, they can be linearly independent; on the other hand, any selection of n + 1 vectors is linearly dependent.
Based on a basis {|χ₁⟩, |χ₂⟩, ···}, an arbitrary vector |ψ⟩ of a Hilbert space H can be expanded as a linear combination,

    |ψ⟩ = c₁ |χ₁⟩ + c₂ |χ₂⟩ + ···,    (1.36)

where the coefficients c₁, c₂, ··· are uniquely determined for a given basis. If we choose a different basis {|χ′₁⟩, |χ′₂⟩, ···}, another expansion is given as

    |ψ⟩ = c′₁ |χ′₁⟩ + c′₂ |χ′₂⟩ + ···,    (1.37)

where in general c₁ ≠ c′₁, c₂ ≠ c′₂, ···.
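Linear (in)dependence of a finite set of column vectors can be tested by the rank of the matrix whose columns are those vectors. A hedged sketch with NumPy (the vectors are illustrative, and `matrix_rank` is a numerical, tolerance-based check, not an exact algebraic one):

```python
import numpy as np

# Candidate vectors in C^3 (illustrative).
v1 = np.array([1.0, 0.0, 1.0])
v2 = np.array([0.0, 1.0, 1.0])
v3 = v1 + 2 * v2  # deliberately a linear combination of v1, v2

M_dep = np.column_stack([v1, v2, v3])
M_ind = np.column_stack([v1, v2, np.array([0.0, 0.0, 1.0])])

rank_dep = np.linalg.matrix_rank(M_dep)  # 2: {v1, v2, v3} is linearly dependent
rank_ind = np.linalg.matrix_rank(M_ind)  # 3: this triple forms a basis of C^3
```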


If the additional condition of orthonormality is imposed on a basis,

    ⟨χ_r|χ_s⟩ = δ_rs = { 1 (r = s); 0 (r ≠ s) },    (1.38)

where δ_rs is the Kronecker delta symbol, the basis {|χ₁⟩, |χ₂⟩, ···} is called an orthonormal basis of H or a complete orthonormal system (CONS). If {|χ₁⟩, |χ₂⟩, ···} is a CONS, taking the inner product of ⟨χ_r| with the expansion of |ψ⟩ in the CONS,

    |ψ⟩ = Σ_s c_s |χ_s⟩,    (1.39)

leads to

    ⟨χ_r|ψ⟩ = ⟨χ_r| (Σ_s c_s |χ_s⟩) = Σ_s c_s ⟨χ_r|χ_s⟩ = Σ_s c_s δ_rs = c_r,    (1.40)

where the corresponding expansion coefficient c_r is obtained solely through the inner product. This property is extremely useful. Furthermore, from the expression

    |ψ⟩ = Σ_r c_r |χ_r⟩ = Σ_r |χ_r⟩ c_r = Σ_r |χ_r⟩⟨χ_r|ψ⟩,    (1.41)

we can infer the naive expression

    Σ_r |χ_r⟩⟨χ_r| = 1.    (1.42)

This relation is also extremely useful in quantum mechanics.
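Both Eq. (1.40) (coefficients from inner products alone) and the completeness relation Eq. (1.42) can be checked concretely. A sketch with NumPy for a CONS of C² (the particular basis and vector are illustrative):

```python
import numpy as np

# An orthonormal basis (CONS) of C^2 other than the Cartesian one.
chi1 = np.array([1.0, 1.0]) / np.sqrt(2)
chi2 = np.array([-1.0, 1.0]) / np.sqrt(2)

psi = np.array([2.0, -1j])

# Expansion coefficients via inner products only, Eq. (1.40): c_r = <chi_r|psi>.
c1 = np.vdot(chi1, psi)
c2 = np.vdot(chi2, psi)

# Reconstruction, Eq. (1.39), and completeness, Eq. (1.42):
# sum_r |chi_r><chi_r| should equal the identity.
psi_rebuilt = c1 * chi1 + c2 * chi2
completeness = np.outer(chi1, chi1.conj()) + np.outer(chi2, chi2.conj())
```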

1.4 Geometric meaning of Expansion formula

2 On linear operators in finite-dimensional H

2.1 Operator

As we discussed, physical states are represented as vectors in a Hilbert space, and physical operations can transform such vectors in quantum theories. This transformation is a mapping from a vector |ψ⟩ ∈ H to another vector |ψ′⟩ ∈ H. Here, we introduce a symbolic notation of the mapping in an abstract way,

    |ψ′⟩ = Ô |ψ⟩,    (2.1)

where the symbolic element Ô is called an operator, which acts on |ψ⟩ and symbolically represents the mapping from |ψ⟩ to |ψ′⟩.
In quantum mechanics/theories, if an operator describing a mapping Â : H → H obeys the following properties for arbitrary vectors |ψ₁⟩, |ψ₂⟩, |ψ⟩ ∈ H and an arbitrary number c ∈ C,

    Â (|ψ₁⟩ + |ψ₂⟩) = Â |ψ₁⟩ + Â |ψ₂⟩,    (2.2)
    Â (c |ψ⟩) = c (Â |ψ⟩),    (2.3)

Â is called a linear operator, or just an operator as technical slang. We will adopt this slang in the following.

• For an arbitrary |ψ⟩ ∈ H, the mapping Î : H → H obeying

    Î |ψ⟩ := |ψ⟩    (2.4)

is called the identity operator or unit operator.

• Two operators Â₁ and Â₂ are said to be equal to each other (Â₁ = Â₂) if

    Â₁ |ψ⟩ = Â₂ |ψ⟩    (2.5)

holds for an arbitrary |ψ⟩ ∈ H.

• For a specific operator Â : H → H, if an operator Â′ : H → H that follows

    Â′ Â = Â Â′ = Î    (2.6)

is well-defined and exists, the following is true for any vector |ψ⟩:

    Â′ Â |ψ⟩ = |ψ⟩.    (2.7)

This Â′ is called the inverse of Â.

• From the operators Â and B̂, the addition of the operators Â + B̂, the scalar multiplication cÂ (c ∈ C), and the products of the operators ÂB̂ (and B̂Â) are defined as follows:

    (Â + B̂) |ψ⟩ := Â |ψ⟩ + B̂ |ψ⟩,    (2.8)
    (cÂ) |ψ⟩ := c (Â |ψ⟩),    (2.9)
    (ÂB̂) |ψ⟩ := Â (B̂ |ψ⟩),    (2.10)
    (B̂Â) |ψ⟩ := B̂ (Â |ψ⟩).    (2.11)

• The operator 0̂ defined by, for any vector |ψ⟩ ∈ H,

    0̂ |ψ⟩ = |null⟩    (2.12)

is called the zero operator. The following properties hold for 0̂:

    Â + 0̂ = Â,  Â 0̂ = 0̂ Â = 0̂.    (2.13)

• Note that, in general, ÂB̂ ≠ B̂Â. For the identity operator, the following holds for any operator Â:

    Â Î = Î Â = Â,    (2.14)

which means that the identity operator commutes with any operator Â. For measuring the commutability of two operators Â and B̂, the following operator is widely used,

    [Â, B̂] := ÂB̂ − B̂Â,    (2.15)

which is called the commutator (between Â and B̂). It is easy to recognise that Â and B̂ commute (ÂB̂ = B̂Â) if [Â, B̂] = 0. Sometimes, we also use the anti-commutator, defined as

    {Â, B̂} := ÂB̂ + B̂Â.    (2.16)
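A concrete non-commuting pair makes Eqs. (2.15) and (2.16) tangible. As an illustrative example (not taken from the text), the Pauli matrices σ_x and σ_y satisfy [σ_x, σ_y] = 2i σ_z and {σ_x, σ_y} = 0:

```python
import numpy as np

# Two Hermitian 2x2 matrices that do not commute: Pauli matrices sigma_x, sigma_y.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

commutator = sx @ sy - sy @ sx          # [sx, sy], Eq. (2.15)
anticommutator = sx @ sy + sy @ sx      # {sx, sy}, Eq. (2.16)
```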

• One can introduce an operator of the following form,

    B̂ := |χ⟩⟨ξ|.    (2.17)

The operation of B̂ on the vector |ψ⟩ is recognised as

    B̂ |ψ⟩ = |χ⟩⟨ξ|ψ⟩ = ⟨ξ|ψ⟩ |χ⟩,  with ⟨ξ|ψ⟩ ∈ C,    (2.18)

which describes a map from |ψ⟩ to (a multiple of) |χ⟩.

The linear operators in the n-dimensional complex Euclidean space Cⁿ (n ∈ N), which is a finite-dimensional Hilbert space (H = Cⁿ), are represented as n × n matrices with complex components,

    Â ≐ [A_ij],  B̂ ≐ [B_ij]  (i, j = 1, ···, n),  |ψ⟩ ≐ (ψ₁, ψ₂, ···, ψ_n)ᵀ.    (2.19)

The operation of Â on |ψ⟩ is represented as

    Â |ψ⟩ ≐ (Σ_{i=1}^{n} A_{1i} ψ_i, Σ_{i=1}^{n} A_{2i} ψ_i, ···, Σ_{i=1}^{n} A_{ni} ψ_i)ᵀ,    (2.20)

i.e., the product of a matrix and a column vector in Cⁿ. The identity operator and the zero operator are represented by the n × n identity and zero matrices,

    Î ≐ diag(1, 1, ···, 1),  0̂ ≐ the n × n zero matrix.    (2.21)

The addition of the operators Â + B̂, the scalar multiplication cÂ (c ∈ C), and the product of the operators ÂB̂ are represented by the rules of matrices in Cⁿ,

    (Â + B̂)_ij = A_ij + B_ij,    (2.22)
    (cÂ)_ij = c A_ij,    (2.23)
    (ÂB̂)_ij = Σ_{k=1}^{n} A_ik B_kj.    (2.24)

Also, by use of the representations

    |χ⟩ ≐ (χ₁, χ₂, ···, χ_n)ᵀ,  |ξ⟩ ≐ (ξ₁, ξ₂, ···, ξ_n)ᵀ,    (2.25)

the operator B̂ := |χ⟩⟨ξ| is represented as

    B̂ := |χ⟩⟨ξ| ≐ (χ₁, χ₂, ···, χ_n)ᵀ (ξ₁*, ξ₂*, ···, ξ_n*) = [χ_i ξ_j*]  (i, j = 1, ···, n),    (2.26)

i.e., the n × n matrix whose (i, j) component is χ_i ξ_j*.
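The outer-product form of Eq. (2.26) and its action Eq. (2.18) can be cross-checked in NumPy (illustrative vectors):

```python
import numpy as np

# Illustrative vectors in C^2.
chi = np.array([1.0, 2j])
xi = np.array([1j, 1.0])
psi = np.array([3.0, -1.0 + 0j])

# B = |chi><xi| as the matrix with components chi_i * xi_j^*, Eq. (2.26).
B = np.outer(chi, xi.conj())

# Acting on |psi> should reproduce <xi|psi> |chi>, Eq. (2.18).
direct = B @ psi
via_braket = np.vdot(xi, psi) * chi
```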

2.2 Hermitian conjugate

For arbitrary vectors |χ⟩, |ψ⟩ ∈ H, the operator Â† associated with the operator Â that satisfies

    ⟨χ|Âψ⟩ = ⟨Â†χ|ψ⟩    (2.27)

is called the Hermitian conjugate of Â. Note that, in the above, |Âψ⟩ should be understood as Â |ψ⟩, and ⟨Â†χ| should be recognised as the bra conjugate of the ket vector Â† |χ⟩. In terms of the inner product, we can write Eq. (2.27) as follows:

    ⟨χ|Â|ψ⟩ = ⟨Â†χ|ψ⟩,    (2.28)

and if we remove the ket vector part |ψ⟩, we get

    ⟨χ| Â = ⟨Â†χ|.    (2.29)

The complex conjugate of Eq. (2.27) is given as

    ⟨Âψ|χ⟩ = ⟨χ|Âψ⟩* = ⟨Â†χ|ψ⟩* = ⟨ψ|Â†χ⟩,    (2.30)

which leads to

    ⟨χ|Âψ⟩* = ⟨ψ|Â†χ⟩,    (2.31)

which is another expression of the Hermitian conjugate of Â.
In the n-dimensional complex Euclidean space Cⁿ (n ∈ N), we will derive the concrete form of the Hermitian conjugate. We use the component representations

    ⟨χ|ψ⟩ = Σ_{j=1}^{n} (|χ⟩)_j* (|ψ⟩)_j,    (2.32)
    (Â |ψ⟩)_j = Σ_{k=1}^{n} (Â)_jk (|ψ⟩)_k,    (2.33)

from which the components of a product of operators follow as

    (ÂB̂ |ψ⟩)_j = Σ_{k=1}^{n} (Â)_jk (B̂ |ψ⟩)_k = Σ_{k,ℓ=1}^{n} (Â)_jk (B̂)_kℓ (|ψ⟩)_ℓ,    (2.34)
    ⇒ (ÂB̂)_jℓ = Σ_{k=1}^{n} (Â)_jk (B̂)_kℓ.    (2.35)

Also,

    ⟨χ|Âψ⟩ = Σ_{j=1}^{n} (|χ⟩)_j* (Â |ψ⟩)_j = Σ_{j,ℓ=1}^{n} (Â)_jℓ (|χ⟩)_j* (|ψ⟩)_ℓ,    (2.36)
    ⟨Â†χ|ψ⟩ = Σ_{j,ℓ=1}^{n} ((Â†)_ℓj)* (|χ⟩)_j* (|ψ⟩)_ℓ,    (2.37)

and thus, comparing the two, we get

    (Â†)_ℓj = ((Â)_jℓ)*.    (2.38)
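Eq. (2.38) says that in Cⁿ the Hermitian conjugate is the conjugate transpose, and Eq. (2.27) then holds automatically. A numerical check with NumPy (random illustrative inputs):

```python
import numpy as np

rng = np.random.default_rng(1)

# A random complex 3x3 matrix and vectors.
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
chi = rng.standard_normal(3) + 1j * rng.standard_normal(3)
psi = rng.standard_normal(3) + 1j * rng.standard_normal(3)

# Eq. (2.38): the Hermitian conjugate is the conjugate transpose.
A_dag = A.conj().T

# Defining property, Eq. (2.27): <chi|A psi> = <A^dagger chi|psi>.
lhs = np.vdot(chi, A @ psi)
rhs = np.vdot(A_dag @ chi, psi)
```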

In general, for two vectors |ψ₁⟩, |ψ₂⟩ ∈ H, if the following condition holds,

    ∀ |χ⟩ ∈ H, ⟨χ|ψ₁⟩ = ⟨χ|ψ₂⟩,    (2.39)

we can conclude

    |ψ₁⟩ = |ψ₂⟩.    (2.40)

It is easy to derive this property by choosing |χ⟩ = |ψ₁⟩ − |ψ₂⟩. Similarly, we can derive the corresponding property for two operators Â₁, Â₂: if the following condition holds,

    ∀ |χ⟩, ∀ |ψ⟩ ∈ H, ⟨χ|Â₁|ψ⟩ = ⟨χ|Â₂|ψ⟩,    (2.41)

then Â₁ = Â₂, by use of the properties of the zero operator. The following properties of the Hermitian conjugate are useful (and not difficult to prove):

    (Â + B̂)† = Â† + B̂†,    (2.42)
    (cÂ)† = c* Â†,    (2.43)
    (ÂB̂)† = B̂† Â†,    (2.44)
    ([Â, B̂])† = [B̂†, Â†] = −[Â†, B̂†],    (2.45)
    (Â†)† = Â.    (2.46)

2.3 Eigenvalue of Operator

In general, the transformed vector Â |ψ⟩ is not parallel to |ψ⟩, while for specific vectors |v⟩, Â |v⟩ can be parallel to |v⟩. For an operator Â, a nonzero vector |v⟩ ≠ |null⟩ and a corresponding complex number λ satisfying

    Â |v⟩ = λ |v⟩    (2.47)

are called an eigenvector and an eigenvalue, respectively. The set of eigenvalues is called a spectrum.

• If |v⟩ is an eigenvector of Â, so is c |v⟩ (c ∈ C, c ≠ 0). This means that the norm of an eigenvector is not uniquely determined.

• Since

    Â |v⟩ = λ |v⟩ ⇔ (Â − λÎ) |v⟩ = |null⟩,    (2.48)

if the inverse of the operator (Â − λÎ) exists, |v⟩ is concluded to be |null⟩, and no eigenvector exists in this case.

• The necessary and sufficient condition for the existence of |v⟩ is

    det(Â − λÎ) = 0,    (2.49)

which is called the characteristic equation of Â.

• Two eigenvectors corresponding to different eigenvalues are linearly independent.

• For an n × n complex matrix, if the n eigenvalues are non-degenerate, the n eigenvectors are linearly independent, and they form a basis. If some eigenvalues are degenerate, the eigenvectors are not necessarily linearly independent.
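Eqs. (2.47) and (2.49) can be checked on a small example with NumPy (the matrix is an illustrative choice with distinct eigenvalues 2 and 3):

```python
import numpy as np

# An illustrative 2x2 matrix (not Hermitian) with distinct eigenvalues.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

eigvals, eigvecs = np.linalg.eig(A)  # columns of eigvecs are eigenvectors

# Each pair satisfies A|v> = lambda|v>, Eq. (2.47).
v0 = eigvecs[:, 0]
check0 = A @ v0 - eigvals[0] * v0

# Characteristic equation, Eq. (2.49): det(A - lambda I) vanishes at an eigenvalue.
char_at_eig = np.linalg.det(A - eigvals[0] * np.eye(2))
```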

2.4 Eigenvalue of Hermitian Operator

In general, Â ≠ Â†. If Â† = Â, such an operator is called a Hermitian operator. (Also, if Â† = −Â, such an operator is called an anti-Hermitian operator.) Hermitian operators have the following significant properties:

(i) The eigenvalues of a Hermitian operator are real.

(ii) Eigenvectors belonging to different eigenvalues are orthogonal.

(iii) n linearly independent eigenvectors are always obtained, even when there is degeneracy in the eigenvalues. By orthogonalising eigenvectors belonging to the same eigenvalue by the Gram–Schmidt method, all eigenvectors can be made orthonormal. This means that we can prepare a CONS from the eigenvectors of a Hermitian operator.

The proof of (i) is as follows. From Eq. (2.47), we get

    ⟨v|Â|v⟩ = ⟨v|λv⟩ = λ ⟨v|v⟩ = ⟨Â†v|v⟩.    (2.50)

If Â† = Â, we get further

    ⟨Â†v|v⟩ = ⟨Âv|v⟩ = ⟨λv|v⟩ = λ* ⟨v|v⟩,    (2.51)

and thus, we reach

    λ ⟨v|v⟩ = λ* ⟨v|v⟩ ⇔ (λ − λ*) ⟨v|v⟩ = 0.    (2.52)

Since |v⟩ is not the null vector by definition, ⟨v|v⟩ ≠ 0. Therefore, we conclude

    λ − λ* = 0 ⇔ λ = λ*,    (2.53)

which means that the eigenvalues of a Hermitian operator are real.

The proof of (ii) is as follows. For an eigenvector belonging to a different eigenvalue λ′ ≠ λ,

    Â |v′⟩ = λ′ |v′⟩,    (2.54)

and we get from it

    ⟨v|Â|v′⟩ = ⟨v|Âv′⟩ = λ′ ⟨v|v′⟩ = ⟨Â†v|v′⟩ = ⟨Âv|v′⟩ = ⟨λv|v′⟩ = λ* ⟨v|v′⟩ = λ ⟨v|v′⟩,    (2.55)

where we have used Â† = Â and λ = λ*, and they lead to

    λ′ ⟨v|v′⟩ = λ ⟨v|v′⟩ ⇔ (λ − λ′) ⟨v|v′⟩ = 0.    (2.56)

Under the assumption λ ≠ λ′, we reach

    ⟨v|v′⟩ = 0,    (2.57)

which means that eigenvectors belonging to different eigenvalues are orthogonal. For the proof of (iii), refer to a suitable linear algebra textbook.
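Properties (i)–(iii) can be observed numerically. NumPy's `eigh` routine is specialised to Hermitian matrices and returns real eigenvalues together with an orthonormal set of eigenvectors (the matrix below is an illustrative Hermitian example):

```python
import numpy as np

# An illustrative Hermitian matrix: A equals its conjugate transpose.
A = np.array([[2.0, 1j],
              [-1j, 3.0]])

# w: real eigenvalues (property (i)); columns of V: orthonormal eigenvectors
# (properties (ii) and (iii)).
w, V = np.linalg.eigh(A)

orthonormality = V.conj().T @ V  # should be the identity matrix
```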

2.5 Comment on degenerate eigenvalues

Sometimes, there are multiple linearly independent eigenvectors {|v_λ^(1)⟩, |v_λ^(2)⟩, ···, |v_λ^(k)⟩} belonging to a focused eigenvalue λ,

    Â |v_λ^(s)⟩ = λ |v_λ^(s)⟩.    (2.58)

The number of such eigenvectors (belonging to the focused eigenvalue λ) is called the degree of degeneracy, and such eigenvalues are called degenerate. After a suitable recombination of such eigenvectors, the eigenvectors can be made to obey

    ⟨v_λ^(r)|v_λ^(s)⟩ = δ_rs  (r, s = 1, 2, ···, k).    (2.59)

An arbitrary vector of the eigenspace belonging to λ is represented as

    |ψ_λ⟩ = Σ_{r=1}^{k} c_λ^(r) |v_λ^(r)⟩,    (2.60)

with coefficients {c_λ^(1), c_λ^(2), ···, c_λ^(k)}. Based on such eigenvectors of a Hermitian operator, we can construct a CONS (after suitable orthogonalisation and normalisation), and an arbitrary vector |ψ⟩ ∈ H is represented as

    |ψ⟩ = Σ_λ Σ_{r=1}^{k_λ} c_λ^(r) |v_λ^(r)⟩,    (2.61)

where

    ⟨v_λ^(r)|v_{λ′}^(r′)⟩ = δ_{λλ′} δ_{rr′}.    (2.62)

Here, if an eigenvalue λ is not degenerate, r takes only the value 1, and we can skip depicting ‘(1)’. Every coefficient (for representing |ψ⟩) is calculable simply by taking the inner product,

    c_λ^(r) = ⟨v_λ^(r)|ψ⟩.    (2.63)

2.6 Projection operator and Spectral decomposition

If an operator Π̂ : H → H satisfies the following properties,

    Π̂† = Π̂,  Π̂Π̂ = (Π̂)² = Π̂,    (2.64)

it is called a projection operator. It is a Hermitian operator, and its eigenvalues are 0 or 1 (due to λ² = λ).

• The following projection operator represents the projection onto |χ⟩:

    Π̂_χ |ψ⟩ = |χ⟩⟨χ|ψ⟩ / ⟨χ|χ⟩,  Π̂_χ := |χ⟩⟨χ| / ⟨χ|χ⟩.    (2.65)

• If {|χ₁⟩, |χ₂⟩, ···, |χ_k⟩} are orthonormal, the following is a projection operator:

    Π̂′ |ψ⟩ = Σ_{r=1}^{k} |χ_r⟩⟨χ_r|ψ⟩,  Π̂′ := Σ_{r=1}^{k} |χ_r⟩⟨χ_r|.    (2.66)

• Based on a CONS constructed from the eigenvectors {|v_λ^(r)⟩} of a Hermitian operator Â, we can do the expansion (as we discussed),

    |ψ⟩ = Σ_λ Σ_{r=1}^{k_λ} c_λ^(r) |v_λ^(r)⟩ = Σ_λ Σ_{r=1}^{k_λ} |v_λ^(r)⟩⟨v_λ^(r)|ψ⟩.    (2.67)

By operating with Â from the left, we get

    Â |ψ⟩ = Σ_λ Σ_{r=1}^{k_λ} Â |v_λ^(r)⟩⟨v_λ^(r)|ψ⟩ = Σ_λ λ Σ_{r=1}^{k_λ} |v_λ^(r)⟩⟨v_λ^(r)|ψ⟩ = Σ_λ λ Π̂_λ |ψ⟩,    (2.68)

with the projection operator onto the eigenspace of λ,

    Π̂_λ := Σ_{r=1}^{k_λ} |v_λ^(r)⟩⟨v_λ^(r)|.    (2.69)

Since |ψ⟩ is arbitrary, we get

    Â = Σ_λ λ Π̂_λ,    (2.70)

which is called the spectral decomposition of the operator Â.
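The spectral decomposition Eq. (2.70) can be rebuilt explicitly from eigenvectors, including a degenerate eigenvalue. A sketch with NumPy (the diagonal matrix below is an illustrative example with a doubly degenerate eigenvalue 2):

```python
import numpy as np

# Illustrative Hermitian matrix with a degenerate eigenvalue (2 appears twice).
A = np.array([[1.0, 0.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 2.0]])

w, V = np.linalg.eigh(A)  # real eigenvalues, orthonormal eigenvector columns

# Rebuild A = sum_lambda lambda * P_lambda, Eq. (2.70), where
# P_lambda = sum of |v><v| over the eigenvectors of that eigenvalue, Eq. (2.69).
A_rebuilt = np.zeros_like(A)
for lam in np.unique(np.round(w, 10)):
    cols = [V[:, i] for i in range(len(w)) if abs(w[i] - lam) < 1e-9]
    P = sum(np.outer(v, v.conj()) for v in cols)  # projector onto the eigenspace
    A_rebuilt = A_rebuilt + lam * P
```

After the loop, `P` holds the projector onto the last (degenerate) eigenspace; it is idempotent, as Eq. (2.64) requires.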

3 Matrix representation, Unitary transformation, Diagonalisation

Here, we will describe how to represent abstract vectors under an orthonormal basis (CONS).

3.1 Number vector representation of abstract vector

As we learned, a CONS of H is a set of vectors {|χ₁⟩, |χ₂⟩, ···} that follows

    ⟨χ_r|χ_s⟩ = δ_rs,    (3.1)

and any vector |ψ⟩ in H can be represented uniquely as

    |ψ⟩ = Σ_r c_r |χ_r⟩ = Σ_r ⟨χ_r|ψ⟩ |χ_r⟩ = Σ_r |χ_r⟩⟨χ_r|ψ⟩ = (Σ_r |χ_r⟩⟨χ_r|) |ψ⟩,    (3.2)

where we can read off the completeness relation,

    Σ_r |χ_r⟩⟨χ_r| = Î.    (3.3)

The index r runs from 1 to dim H. When two vectors are represented based on the same CONS as

    |ψ⟩ = Σ_s c_s |χ_s⟩ = Σ_s |c_s χ_s⟩,  |θ⟩ = Σ_r d_r |χ_r⟩ = Σ_r |d_r χ_r⟩,    (3.4)

their inner product is represented as

    ⟨θ|ψ⟩ = Σ_{r,s} d_r* c_s ⟨χ_r|χ_s⟩ = Σ_{r,s} d_r* c_s δ_rs = Σ_r d_r* c_r.    (3.5)

In particular, for |ψ⟩ = |θ⟩,

    ‖|ψ⟩‖² = ⟨ψ|ψ⟩ = Σ_r |c_r|².    (3.6)

The representations in Eqs. (3.5) and (3.6) are universal as long as the expansions are based on the common CONS.

3.2 Matrix representation of abstract operator

We can insert the identity operator Î anywhere relevant, and thus we see

    Â |ψ⟩ = Î Â Î |ψ⟩
          = (Σ_r |χ_r⟩⟨χ_r|) Â (Σ_s |χ_s⟩⟨χ_s|) |ψ⟩
          = Σ_{r,s} |χ_r⟩ ⟨χ_r|Â|χ_s⟩ ⟨χ_s|ψ⟩    [⟨χ_r|Â|χ_s⟩ =: A_rs, ⟨χ_s|ψ⟩ = c_s]
          = Σ_{r,s} |χ_r⟩ A_rs c_s.    (3.7)

Furthermore, if we choose |θ⟩ = Â |ψ⟩,

    |θ⟩ = Â |ψ⟩ = Σ_r |χ_r⟩ d_r = Σ_{r,s} |χ_r⟩ A_rs c_s,    (3.8)

thus we get

    d_r = Σ_s A_rs c_s,    (3.9)

where this rule is equivalent to the product of a matrix and a column vector (as long as we use a common CONS). Alternatively, we can focus on Â itself as

    Â = Î Â Î = Σ_{r,s} |χ_r⟩⟨χ_r|Â|χ_s⟩⟨χ_s|,    (3.10)

where ⟨χ_r|Â|χ_s⟩ is the matrix element of the operator Â (when we adopt {|χ₁⟩, |χ₂⟩, ···}). In particular, the matrix element of the identity operator takes

    I_rs := ⟨χ_r|Î|χ_s⟩ = ⟨χ_r|χ_s⟩ = δ_rs,    (3.11)

for any CONS, while that of the zero operator takes

    O_rs := ⟨χ_r|0̂|χ_s⟩ = 0,    (3.12)

for any CONS.

3.3 Unitary transformation

In a Hilbert space H, an infinite number of CONSs is available. For example, in H = C², an arbitrary vector can be represented as

    |ψ⟩ = (c₁, c₂)ᵀ = c₁ (1, 0)ᵀ + c₂ (0, 1)ᵀ,    (3.13)

and the orthonormal vectors

    |χ₁⟩ := (1, 0)ᵀ,  |χ₂⟩ := (0, 1)ᵀ,    (3.14)

are easily recognised as a CONS in C². Also, we can easily show that

    |χ′₁⟩ := (1/√2) (1, 1)ᵀ,  |χ′₂⟩ := (1/√2) (−1, 1)ᵀ,    (3.15)

is another CONS in C².
In general, in a Hilbert space H, vectors of a CONS {|χ₁⟩, |χ₂⟩, ···} and those of another CONS {|χ′₁⟩, |χ′₂⟩, ···} are related as

    |χ_s⟩ = Σ_r |χ′_r⟩ ⟨χ′_r|χ_s⟩ = Σ_r |χ′_r⟩ U_rs,    (3.16)

with

    U_rs := ⟨χ′_r|χ_s⟩.    (3.17)

For a vector |ψ⟩ ∈ H, when we expand it in the two ways,

    |ψ⟩ = Σ_s |χ_s⟩ c_s = Σ_r |χ′_r⟩ c′_r,
        = Σ_{s,r} |χ′_r⟩⟨χ′_r|χ_s⟩ c_s = Σ_{s,r} |χ′_r⟩ U_rs c_s,    (3.18)

this leads to

    c′_r = Σ_s U_rs c_s,    (3.19)

where this follows the linear transformation by a matrix. Here, we can see

    Σ_r (U†)_sr U_rq = Σ_r U_rs* U_rq = Σ_r ⟨χ_s|χ′_r⟩⟨χ′_r|χ_q⟩ = ⟨χ_s|χ_q⟩ = δ_sq,    (3.20)

which means that U_rs is a unitary matrix,

    U†U = I = UU†.    (3.21)

The following relation is significantly important:

    U† = U⁻¹.    (3.22)
Next, we focus on the unitary transformation of a matrix element. When we adopt the CONS {|χ′₁⟩, |χ′₂⟩, ···}, the corresponding matrix element is represented as

    A′_rs = ⟨χ′_r|Â|χ′_s⟩
          = Σ_{p,q} ⟨χ′_r|χ_p⟩⟨χ_p|Â|χ_q⟩⟨χ_q|χ′_s⟩
          = Σ_{p,q} U_rp A_pq U_sq*
          = Σ_{p,q} U_rp A_pq (U†)_qs,    (3.23)

which is the component representation of the unitary transformation of a matrix,

    A′ = U A U†.    (3.24)
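Eqs. (3.17), (3.21), and (3.24) can be traced numerically using the two CONSs of Eqs. (3.14) and (3.15). A sketch with NumPy (the operator Â chosen below is an illustrative example):

```python
import numpy as np

# Two CONSs of C^2: the Cartesian basis and the rotated basis of Eq. (3.15).
chi = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
chip = [np.array([1.0, 1.0]) / np.sqrt(2), np.array([-1.0, 1.0]) / np.sqrt(2)]

# U_rs = <chi'_r|chi_s>, Eq. (3.17).
U = np.array([[np.vdot(chip[r], chi[s]) for s in range(2)] for r in range(2)])

# Unitarity, Eq. (3.21).
unitarity = U.conj().T @ U

# Transformed matrix elements, Eq. (3.24), for an illustrative operator.
A = np.array([[0.0, 1.0], [1.0, 0.0]])  # matrix elements in the chi basis
A_prime = U @ A @ U.conj().T
```

In this example A_prime comes out diagonal, since the rotated basis vectors happen to be eigenvectors of the chosen A.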

3.4 Diagonalisation

Based on the eigen-equation of the Hermitian operator Â (= Â†),

    Â |v_λ^(s)⟩ = λ |v_λ^(s)⟩,    (3.25)

we can get the relationship

    ⟨v_{λ′}^(r)|Â|v_λ^(s)⟩ = ⟨v_{λ′}^(r)|λ|v_λ^(s)⟩ = λ ⟨v_{λ′}^(r)|v_λ^(s)⟩ = λ δ_{λ′λ} δ_rs,    (3.26)

i.e., in the CONS formed by the eigenvectors, the matrix representation of Â is diagonal, with the eigenvalues on the diagonal.

3.5 Trace

In general, matrix elements depend on the choice of CONS. However, the trace and the determinant of a matrix representation do not depend on that choice. Here, we focus on the trace,

    Tr Â := Σ_r ⟨χ_r|Â|χ_r⟩ = A₁₁ + A₂₂ + ···.    (3.27)

• If the Hilbert space is finite-dimensional, the summation is over a finite number of elements, and it converges.

• In a finite-dimensional Hilbert space, the following property is established:

    Tr(ÂB̂) = Σ_s ⟨χ_s|ÂB̂|χ_s⟩
            = Σ_{r,s} ⟨χ_s|Â|χ_r⟩⟨χ_r|B̂|χ_s⟩
            = Σ_{r,s} ⟨χ_r|B̂|χ_s⟩⟨χ_s|Â|χ_r⟩
            = Σ_r ⟨χ_r|B̂Â|χ_r⟩
            = Tr(B̂Â),    (3.28)

which is called the cyclic property of the trace. It is easy to generalise it as

    Tr(ÂB̂Ĉ) = Tr(ĈÂB̂) = Tr(B̂ĈÂ),    (3.29)

and to cases where more than three operators are involved.


• The trace of the identity operator equals the dimension of the Hilbert space,

    Tr Î = dim H.    (3.30)

• If the matrices A and A′ are connected by a unitary transformation generated by U as

    A′ = U A U†  (U U† = U† U = I),    (3.31)

the trace of A′ equals that of A, since

    Tr(A′) = Tr(U A U†) = Tr(U† U A) = Tr(A).    (3.32)
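Both the cyclic property Eq. (3.28) and the unitary invariance Eq. (3.32) can be checked with random matrices. A sketch with NumPy (illustrative inputs; the unitary is obtained from a QR decomposition):

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative complex matrices.
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

# Cyclic property, Eq. (3.28): Tr(AB) = Tr(BA).
cyc_lhs = np.trace(A @ B)
cyc_rhs = np.trace(B @ A)

# Unitary invariance, Eq. (3.32): Q from a QR decomposition is unitary.
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3)))
tr_before = np.trace(A)
tr_after = np.trace(Q @ A @ Q.conj().T)
```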

• It is straightforward to show the basis independence of the trace directly:

    Tr Â = Σ_r ⟨χ′_r|Â|χ′_r⟩
         = Σ_{r,p,q} ⟨χ′_r|χ_p⟩⟨χ_p|Â|χ_q⟩⟨χ_q|χ′_r⟩
         = Σ_{r,p,q} ⟨χ_p|Â|χ_q⟩⟨χ_q|χ′_r⟩⟨χ′_r|χ_p⟩
         = Σ_{p,q} ⟨χ_p|Â|χ_q⟩⟨χ_q|χ_p⟩    [Σ_r |χ′_r⟩⟨χ′_r| = Î, ⟨χ_q|χ_p⟩ = δ_qp]
         = Σ_p ⟨χ_p|Â|χ_p⟩.    (3.33)
