
Quantum Chemistry II: Math Introduction

Albeiro Restrepo
May 24, 2009

Contents
1 More about matrices
  1.1 General definitions
    1.1.1 Transpose matrix
    1.1.2 Adjoint matrix
  1.2 Square matrices
    1.2.1 Diagonal matrix
    1.2.2 Identity matrix
    1.2.3 Inverse matrix
    1.2.4 Unitary matrix
    1.2.5 Hermitian matrix
    1.2.6 Trace or Character
  1.3 Useful matrix theorems

2 Operators


1 More about matrices


Let’s continue our study of matrices and their algebra with a couple of general definitions, followed by some more definitions that apply only to square matrices.

1.1 General definitions


The following definitions will apply to any matrix.

1.1.1 Transpose matrix


The transpose $A^T_{n\times m}$ of a matrix $A_{m\times n}$ is the matrix resulting from interchanging rows and columns in $A$, such that their matrix elements are related by

$$\left(A^T\right)_{ij} = (A)_{ji} = A_{ji} \tag{1}$$

Example:

 
$$A_{2\times 3} = \begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{pmatrix} \implies A^T_{3\times 2} = \begin{pmatrix} 1 & 4 \\ 2 & 5 \\ 3 & 6 \end{pmatrix}$$
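As a quick numerical illustration (a minimal sketch using NumPy, which is not part of these notes; `.T` is NumPy's transpose attribute):

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])   # the 2x3 matrix from the example above

print(A.T)                  # its 3x2 transpose: rows and columns interchanged
```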

1.1.2 Adjoint matrix


The adjoint A† of a matrix A is the complex conjugate of the transpose of A. Matrix
elements are related by

$$\left(A^\dagger\right)_{ij} = \left[\left(A^T\right)_{ij}\right]^* = A^*_{ji} \tag{2}$$

Example:

 
$$A = \begin{pmatrix} 1 & 2i & 3 \\ 4 & 5 & 6 \end{pmatrix} \implies A^\dagger = \begin{pmatrix} 1 & 4 \\ -2i & 5 \\ 3 & 6 \end{pmatrix}$$
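The same computation as a NumPy sketch (NumPy spells the adjoint as conjugation followed by transposition):

```python
import numpy as np

A = np.array([[1, 2j, 3],
              [4, 5,  6]])      # complex entries, as in the example

A_dag = A.conj().T              # adjoint: complex conjugate of the transpose
print(A_dag)                    # [[1, 4], [-2j, 5], [3, 6]]
```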

If the matrix elements of $A$ are real, then $A^\dagger = A^T$. The matrix product between the adjoint of a column matrix and the matrix itself is given by

$$a^\dagger a = \begin{pmatrix} a_1^* & a_2^* & \cdots & a_n^* \end{pmatrix} \begin{pmatrix} a_1 \\ a_2 \\ \vdots \\ a_n \end{pmatrix} = \sum_{i=1}^n a_i^* a_i = \sum_{i=1}^n |a_i|^2$$

which is exactly the squared magnitude of the vector represented by $\vec{a}$. Therefore, from now on, our definition of the scalar product between vectors with complex components will be

$$\vec{a} \cdot \vec{b} = a^\dagger b = \sum_{i=1}^n a_i^* b_i \tag{3}$$
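To see Eq. (3) numerically (a sketch; `np.vdot` conjugates its first argument, matching the definition):

```python
import numpy as np

a = np.array([1 + 1j, 2, 3j])
b = np.array([2, 1j, 1])

# Complex scalar product: conjugate the first vector, then sum the products
print(np.vdot(a, b))           # same as (a.conj() * b).sum()
print(np.vdot(a, a).real)      # |a|^2: always real and non-negative
```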

1.2 Square matrices


The following definitions will apply only to square matrices. A square matrix is one
for which the number of rows equals the number of columns.

1.2.1 Diagonal matrix


A diagonal matrix may contain elements different from zero only on the diagonal. Matrix elements of a diagonal matrix satisfy

$$(A)_{ij} = A_{ij}\,\delta_{ij} \tag{4}$$


Example:

 
$$A = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 3 \end{pmatrix}$$
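In NumPy one would typically build such a matrix from its diagonal (a minimal sketch):

```python
import numpy as np

A = np.diag([1, 2, 3])   # 3x3 matrix with 1, 2, 3 on the diagonal, zeros elsewhere
print(A)
```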

1.2.2 Identity matrix


The identity matrix is a diagonal matrix containing only 1s on the diagonal. Matrix elements for the identity matrix abide by

$$(I)_{ij} = \delta_{ij} \tag{5}$$

There is an identity matrix for every dimension: $I_{2\times 2}$, $I_{5\times 5}$, etc. It follows from the definition that multiplying any matrix by $I$, either on the left or on the right, leaves the matrix unchanged, hence the name identity matrix:

$$\forall A_{m\times m},\ \exists\, I_{m\times m} \text{ such that } A_{m\times m} I_{m\times m} = I_{m\times m} A_{m\times m} = A_{m\times m}$$
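A numerical illustration of this property (a sketch; `np.eye(m)` builds $I_{m\times m}$ and the array values are arbitrary):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
I = np.eye(2)

# Left and right multiplication by I both leave A unchanged
print(np.allclose(A @ I, A), np.allclose(I @ A, A))   # True True
```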

1.2.3 Inverse matrix


The inverse $A^{-1}$ of a matrix $A$ is a matrix that, when multiplied by $A$ in either order, recovers the identity.

$$\forall A_{m\times m},\ \text{if } A^{-1}_{m\times m} \text{ exists, then } A_{m\times m} A^{-1}_{m\times m} = A^{-1}_{m\times m} A_{m\times m} = I_{m\times m}$$

Not all square matrices have inverses. The $-1$ in the notation is intended to mean the inverse, not an exponent, just as $\sin^{-1} x$ stands for the inverse of the sine function.
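A sketch of the round-trip check in NumPy; note that `np.linalg.inv` raises a `LinAlgError` when handed a singular (non-invertible) matrix:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])             # invertible: its determinant is -2
A_inv = np.linalg.inv(A)

# Multiplication in either order recovers the identity (up to roundoff)
print(np.allclose(A @ A_inv, np.eye(2)))   # True
print(np.allclose(A_inv @ A, np.eye(2)))   # True
```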

1.2.4 Unitary matrix


A given matrix is unitary if its inverse equals its adjoint, that is,

$$U \text{ is unitary if } U^{-1} = U^\dagger$$

Unitary matrices with real components ($U_{ij} \in \mathbb{R}$) are said to be orthogonal.
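As a numerical example (a sketch with an arbitrarily chosen matrix; a $2\times 2$ rotation matrix is real and unitary, hence also orthogonal):

```python
import numpy as np

theta = 0.3
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # real rotation matrix

# Unitary check: the adjoint equals the inverse
print(np.allclose(U.conj().T, np.linalg.inv(U)))  # True
```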

1.2.5 Hermitian matrix


A Hermitian matrix is its own adjoint:

$$A \text{ is Hermitian if } A = A^\dagger$$

Hermitian matrices are very important in Quantum Mechanics. We will show later that Hermitian matrices have real eigenvalues and that their eigenvectors are orthogonal.
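A quick numerical preview of that claim (a sketch; `np.linalg.eigvalsh` is NumPy's eigenvalue routine for Hermitian matrices, and the entries below are arbitrary):

```python
import numpy as np

A = np.array([[2,      1 + 1j],
              [1 - 1j, 3     ]])       # equals its own adjoint

print(np.allclose(A, A.conj().T))      # True: A is Hermitian
print(np.linalg.eigvalsh(A))           # its eigenvalues come out real
```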

1.2.6 Trace or Character


The trace or character of a matrix is the sum of the diagonal elements.

$$\operatorname{tr} A = \chi(A) = \sum_{i=1}^n A_{ii} \tag{6}$$

Example:

 
$$A = \begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{pmatrix} \implies \operatorname{tr} A = \sum_{i=1}^3 A_{ii} = 1 + 5 + 9 = 15$$
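The same computation as a one-line NumPy sketch:

```python
import numpy as np

A = np.arange(1, 10).reshape(3, 3)   # the 3x3 matrix 1..9 from the example
print(np.trace(A))                   # 1 + 5 + 9 = 15
```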

Traces have some very important properties to be demonstrated later. For instance,
traces are invariant under unitary transformations (rotations, diagonalizations, etc.);
traces are also independent of the order in which matrices are multiplied. Characters
are very important in group theory, which we will be exploring in the near future.

1.3 Useful matrix theorems


I will now prove some very useful matrix theorems:

Theorem 1.1 The trace of the product of two matrices is invariant regardless of the order of multiplication.

Proof Take the products of two $n \times n$ matrices in reverse order to be

$$AB = C, \qquad BA = D$$

therefore,

$$\operatorname{tr} C = \sum_{m=1}^n C_{mm}, \qquad \operatorname{tr} D = \sum_{p=1}^n D_{pp}$$

$$C_{ij} = \sum_{k=1}^n A_{ik} B_{kj} \implies C_{mm} = \sum_{k=1}^n A_{mk} B_{km} \implies$$

$$\operatorname{tr} C = \sum_{m=1}^n C_{mm} = \sum_{m=1}^n \sum_{k=1}^n A_{mk} B_{km} = \sum_{m=1}^n \sum_{k=1}^n B_{km} A_{mk}$$

the last step is possible because number multiplication is commutative (matrix multi-
plication is not). The above sum contains all the possible products between elements
of A and B, a total of n2 terms. Now, since the summation order is irrelevant, it
is legal to exchange the addition order, and since summation indexes are dummy
variables, we are allowed to change their labels; therefore,

$$\operatorname{tr} C = \sum_{m=1}^n \sum_{k=1}^n B_{km} A_{mk} = \sum_{k=1}^n \left( \sum_{m=1}^n B_{km} A_{mk} \right) = \sum_{k=1}^n D_{kk} = \sum_{p=1}^n D_{pp} = \operatorname{tr} D$$
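A numerical spot check of Theorem 1.1 (a sketch with random matrices; it illustrates, but of course does not replace, the proof):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

# tr(AB) = tr(BA), even though AB != BA in general
print(np.allclose(np.trace(A @ B), np.trace(B @ A)))   # True
print(np.allclose(A @ B, B @ A))                       # False
```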

Theorem 1.2 The inverse of a product of matrices equals the product of the inverses
in reverse order.
Proof Take the products of the inverses of two matrices to be

$$\begin{aligned}
B^{-1} A^{-1} &= C \\
B^{-1} A^{-1} A &= C A \\
B^{-1} &= C A \\
B^{-1} B &= C A B \\
I &= C\,(A B)
\end{aligned}$$

because whatever is inside the parentheses must be the inverse of $C$, we have

$$(AB)^{-1} = C = B^{-1} A^{-1}$$

Theorem 1.2 is known as the switch over rule; it also holds for the adjoint of a matrix
product.
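Both versions of the switch-over rule can be spot-checked numerically (a sketch; random matrices are invertible with probability 1, and the seed is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

inv = np.linalg.inv
print(np.allclose(inv(A @ B), inv(B) @ inv(A)))                 # inverse rule
print(np.allclose((A @ B).conj().T, B.conj().T @ A.conj().T))   # adjoint rule
```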

Theorem 1.3 If $U$ is unitary and $A$ is related to $B$ via the unitary transformation $A = U^\dagger B\, U$, then $B$ is related to $A$ via the reverse transformation $B = U A\, U^\dagger$.
Proof If $U$ is unitary, then $U^{-1} = U^\dagger$; therefore

$$\begin{aligned}
A &= U^\dagger B U \\
A U^{-1} &= U^\dagger B U U^{-1} \\
A U^{-1} &= U^\dagger B \\
A U^\dagger &= U^{-1} B \\
U A U^\dagger &= U U^{-1} B \\
U A U^\dagger &= B
\end{aligned}$$
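A numerical check of Theorem 1.3 (a sketch reusing a rotation matrix as $U$; the matrices are arbitrary):

```python
import numpy as np

theta = 0.7
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # unitary (real orthogonal)
B = np.array([[1.0, 2.0],
              [2.0, 5.0]])

A = U.conj().T @ B @ U                       # forward transformation A = U† B U
print(np.allclose(U @ A @ U.conj().T, B))    # reverse transformation recovers B
```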

2 Operators
An operator is a rule to transform mathematical objects. A classical example of an operator is the derivative operator: it takes a function and transforms it into its derivative. An operator is said to be linear if it is distributive with respect to addition of its arguments (numbers, functions, vectors, matrices, etc.) and with respect to scalar multiplication:

$$O\left(r\vec{a} + s\vec{b}\right) = r\,O\vec{a} + s\,O\vec{b} \quad \text{or} \quad O\left(r\,|a\rangle + s\,|b\rangle\right) = r\,O|a\rangle + s\,O|b\rangle \tag{7}$$

The derivative is a linear operator, while taking the square root is not. Linear operators have some very nice and useful features. Consider for example a linear operator acting on a given ket of its own solution space. Since the ket belongs to the solution space of the operator, it can be written as a linear combination of the $n$ kets of some basis set $\{|i\rangle\}$, and since the operator is linear, it can be distributed inside the sum:

$$O\,|\psi\rangle = O\left(\sum_{i=1}^n c_i\,|i\rangle\right) = \sum_{i=1}^n c_i\, O\,|i\rangle \tag{8}$$

This innocent-looking equation is indeed very powerful, for it tells us that we do not need to know what a linear operator does to every ket of its solution space, possibly infinite in number; it suffices to know what the operator does to the reduced number of kets in a given basis set, and from there its action on the entire space can be constructed (this is also a consequence of the solutions of an operator constituting a vector space).
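Equation (8) is easy to see in a finite-dimensional setting, where a linear operator is just a matrix and kets are column vectors (a sketch with arbitrary values):

```python
import numpy as np

O = np.array([[0.0, 1.0],
              [1.0, 0.0]])        # a linear operator on a 2-dimensional space

ket1 = np.array([1.0, 0.0])       # basis ket |1>
ket2 = np.array([0.0, 1.0])       # basis ket |2>
c1, c2 = 2.0, -3.0                # expansion coefficients

psi = c1 * ket1 + c2 * ket2       # |psi> as a linear combination of basis kets
# Acting on |psi> equals acting on the basis kets and recombining, Eq. (8)
print(np.allclose(O @ psi, c1 * (O @ ket1) + c2 * (O @ ket2)))   # True
```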
