Finite Groups
Group Representations
α : G → GL(V ).
deg α = dimk V.
Remarks:
1. Recall that GL(V )—the general linear group on V —is the group of invert-
ible (or non-singular) linear maps t : V → V .
or in terms of coordinates,
\[
\begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix} \mapsto T \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix}.
\]
α : G → GL(n, k),
G × X → X : (g, x) 7→ gx
satisfying
Now suppose X = V is a vector space. Then we can say that G acts linearly
on V if in addition
gv = α(g)v;
t:U →V
Examples:
1. Recall that the dihedral group D4 is the symmetry group of a square ABCD
[Figure: the square ABCD, with B, A above and C, D below, centre O, and axes Ox, Oy through O]
(Figure ??). Let us take coordinates Ox, Oy as shown through the centre O
of the square. Then
D4 = {e, r, r², r³, c, d, x, y},
The map
g 7→ A(g) ∈ GL(2, R)
defines a 2-dimensional representation ρ of D4 over R. We may describe
this as the natural 2-dimensional representation of D4 .
(Evidently the symmetry group G of any bounded subset S ⊂ E n will have
a similar ‘natural’ representation in Rn .)
The representation ρ is given in matrix terms by
\[
e \mapsto \begin{pmatrix}1&0\\0&1\end{pmatrix},\quad
r \mapsto \begin{pmatrix}0&-1\\1&0\end{pmatrix},\quad
r^2 \mapsto \begin{pmatrix}-1&0\\0&-1\end{pmatrix},\quad
r^3 \mapsto \begin{pmatrix}0&1\\-1&0\end{pmatrix},
\]
\[
c \mapsto \begin{pmatrix}0&1\\1&0\end{pmatrix},\quad
d \mapsto \begin{pmatrix}0&-1\\-1&0\end{pmatrix},\quad
x \mapsto \begin{pmatrix}1&0\\0&-1\end{pmatrix},\quad
y \mapsto \begin{pmatrix}-1&0\\0&1\end{pmatrix}.
\]
(g, x) 7→ gx.
Let
C(X) = C(X, k)
denote the space of maps
f : X → k.
gf (x) = f (g −1 x).
g(hf ) = (gh)f.
(gh)−1 = h−1 g −1
X = {x1 , . . . , xn }.
Then
deg ρ = n = kXk,
the number of elements in X. For the functions
\[
e_y(x) = \begin{cases} 1 & \text{if } x = y,\\ 0 & \text{otherwise}\end{cases}
\]
(ie the characteristic functions of the 1-point subsets) form a basis for C(X).
Also
g e_x = e_{gx},
since
\[
g e_y(x) = e_y(g^{-1}x)
= \begin{cases} 1 & \text{if } g^{-1}x = y\\ 0 & \text{if } g^{-1}x \neq y\end{cases}
= \begin{cases} 1 & \text{if } x = gy\\ 0 & \text{if } x \neq gy.\end{cases}
\]
Thus
g 7→ P (g)
\[
(abc) \mapsto \begin{pmatrix}0&0&1\\1&0&0\\0&1&0\end{pmatrix},\qquad
(ab) \mapsto \begin{pmatrix}0&1&0\\1&0&0\\0&0&1\end{pmatrix}.
\]
(These two instances actually define the representation, since (abc) and (ab)
generate S(3).)
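As a check on the construction g e_x = e_{gx}, here is a small Python sketch (an illustration, not from the notes) that builds the permutation matrices P(g) for S3 acting on X = {a, b, c} and verifies that P is a homomorphism.

```python
import numpy as np
from itertools import permutations

X = ["a", "b", "c"]

def perm_matrix(g):
    """Matrix of g on C(X) in the basis e_a, e_b, e_c,
    using g.e_x = e_{g(x)}: column x has a 1 in row g(x)."""
    P = np.zeros((len(X), len(X)), dtype=int)
    for j, x in enumerate(X):
        P[X.index(g[x]), j] = 1
    return P

# Each element of S3 is stored as a dict x -> g(x).
S3 = [dict(zip(X, image)) for image in permutations(X)]

def compose(g, h):
    """(gh)(x) = g(h(x))."""
    return {x: g[h[x]] for x in X}

# Homomorphism check: P(gh) = P(g) P(h) for all g, h.
for g in S3:
    for h in S3:
        assert np.array_equal(perm_matrix(compose(g, h)),
                              perm_matrix(g) @ perm_matrix(h))
print("P(gh) = P(g)P(h) for all g, h in S3.")
```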
3. A 1-dimensional representation α of a group G over k = R or C is just a
homomorphism
α : G → k×,
where k × denotes the multiplicative group on the set k \ {0}. For
GL(1, k) = k × ,
since we can identify the 1 × 1-matrix [x] with its single entry x.
We call the 1-dimensional representation defined by the identity homomor-
phism
g 7→ 1
(for all g ∈ G) the trivial representation of G, and denote it by 1.
In a 1-dimensional representation, each group element is represented by a
number. Since these numbers commute, the study of 1-dimensional repre-
sentations is much simpler than those of higher dimension.
In general, when investigating the representations of a group G, we start by
determining all its 1-dimensional representations.
Recall that two elements g, h ∈ G are said to be conjugate if
h = xgx−1
α(h) = α(x)α(g)α(x−1 )
= α(x)α(g)α(x)−1
= α(g)α(x)α(x)−1
= α(g),
Let us write
s = (abc), t = (bc).
Then (assuming k = C)
s³ = 1 =⇒ α(s)³ = 1 =⇒ α(s) = 1, ω or ω²,
t² = 1 =⇒ α(t)² = 1 =⇒ α(t) = ±1.
But
tst⁻¹ = s².
It follows that
α(t)α(s)α(t)⁻¹ = α(s)².
But the left-hand side is just α(s), since the values of α are commuting scalars; so α(s) = α(s)², and we deduce that
α(s) = 1.
It follows that S3 has just two 1-dimensional representations: the trivial
representation
1 : g 7→ 1,
and the parity representation
\[
\epsilon : g \mapsto \begin{cases} 1 & \text{if } g \text{ is even,}\\ -1 & \text{if } g \text{ is odd.}\end{cases}
\]
4. The corresponding result is true for all the symmetric groups Sn (for n ≥ 2);
Sn has just two 1-dimensional representations, the trivial representation 1
and the parity representation ε.
To see this, let us recall two facts about Sn .
g = τ1 · · · τr .
g = τ 1 · · · τr .
5. Let’s look again at the dihedral group D4 , ie the symmetry group of the
square ABCD. Let r denote the rotation through π/2, taking A into B; and
let c denote the reflection in AC.
It is readily verified that r and c generate D4 , ie each element g ∈ D4 is
expressible as a word in r and c (eg g = r2 cr). This follows for example
from Lagrange’s Theorem. The subgroup generated by r and c contains at
least the 5 elements 1, r, r2 , r3 , c, and so must be the whole group. (We shall
sometimes denote the identity element in a group by 1, while at other times
we shall use e or I.)
It is also easy to see that r and c satisfy the relations
r4 = 1, c2 = 1, rc = cr3 .
In fact these are defining relations for D4 , ie every relation between r and c
can be derived from these 3.
We can express this in the form
D4 = ⟨r, c : r⁴ = c² = 1, rc = cr³⟩.
It is readily verified that all 4 of the assignments r ↦ ±1, c ↦ ±1 satisfy the 3 defining relations for r
and c. It follows that each defines a homomorphism
α : D4 → k × .
We conclude that D4 has just 4 1-dimensional representations.
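The same conclusion can be reached by brute force. The following Python sketch (not in the original notes) tries every assignment r ↦ a 4th root of unity, c ↦ ±1 against the defining relations; the relation rc = cr³ forces r ↦ ±1, leaving exactly the 4 representations just described.

```python
# Enumerate the 1-dimensional representations over C of
#   D4 = <r, c : r^4 = c^2 = 1, rc = cr^3>.
fourth_roots = [1, 1j, -1, -1j]

reps = []
for ar in fourth_roots:          # candidates for alpha(r)
    for ac in (1, -1):           # candidates for alpha(c)
        if ar * ac == ac * ar * ar * ar:     # relation rc = cr^3
            reps.append((ar, ac))

print(reps)   # [(1, 1), (1, -1), (-1, 1), (-1, -1)]
```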
6. We look now at some examples from chemistry and physics. It should be
emphasized, firstly that the theory is completely independent of these ex-
amples, which can safely be ignored; and secondly, that we are not on oath
when speaking of physics. It would be inappropriate to delve too deeply
here into the physical basis for the examples we give.
First let us look at the methane molecule CH4 . In its stable state the 4
hydrogen atoms are situated at the vertices of a regular tetrahedron, with
the single carbon atom at its centroid (Figure ??).
The molecule evidently has symmetry group S4 , being invariant under per-
mutations of the 4 hydrogen atoms.
Now suppose the molecule is vibrating about this stable position. We sup-
pose that the carbon atom at the centroid remains fixed. (We shall return to
this point later.) Thus the configuration of the molecule at any moment is
defined by the displacement of the 4 hydrogen atoms, say
Xi = (xi1 , xi2 , xi3 ) (i = 1, 2, 3, 4).
While we could explicitly choose the coordinates qk , and determine the ki-
netic energy K, the potential energy form Q evidently depends on the forces
holding the molecule together. Fortunately, we can say a great deal about the
vibrational modes of the molecule without knowing anything about these
forces.
Since these two forms are positive-definite, we can simultaneously diag-
onalize them, ie we can find new generalized coordinates z1 , . . . , z6 such
that
\[
K = \dot z_1^2 + \cdots + \dot z_6^2,\qquad
V = \omega_1^2 z_1^2 + \cdots + \omega_6^2 z_6^2.
\]
\[
z_j = C_j e^{i\omega_j t}.
\]
The set of all solutions of these equations (ie all possible vibrations of the
system) thus forms a 6-dimensional solution-space V .
So far we have made no use of the S4 -symmetry of the CH4 molecule. But
now we see that this symmetry group acts on the solution space V , which
thus carries a representation, ρ say, of S4 . Explicitly, suppose π ∈ S4 is
a permutation of the 4 H atoms. This permutation is ‘implemented’ by
a unique spatial isometry Π. (For example, the permutation (123)(4) is
effected by rotation through 1/3 of a revolution about the axis joining the C
atom to the 4th H atom.)
But now if we apply this isometry Π to any vibration v(t) we obtain a new
vibration Πv(t). In this way the permutation π acts on the solution-space
V.
In general, the symmetry group G of the physical configuration will act on
the solution-space V .
The fundamental result in the representation theory of a finite group G (as
we shall establish) is that every representation ρ of G splits into parts, each
ρ=1+α+β
t 7→ eiωt .
(We shall see that the simple representations of an abelian group are always
1-dimensional.) In effect, Fourier analysis — the splitting of a function
z(x, y, t) (x2 + y 2 ≤ r2 )
where z is the height of the point of the drum at position (x, y).
It is not hard to establish that under small vibrations this function will satisfy
the wave equation
\[
T\left(\frac{\partial^2 z}{\partial x^2} + \frac{\partial^2 z}{\partial y^2}\right) = \rho\,\frac{\partial^2 z}{\partial t^2},
\]
where T is the tension of the membrane and ρ its mass per unit area. This
may be written
\[
\frac{\partial^2 z}{\partial x^2} + \frac{\partial^2 z}{\partial y^2} = \frac{1}{c^2}\,\frac{\partial^2 z}{\partial t^2},
\]
where c = (T /ρ)1/2 is the speed of the wave motion.
The configuration has O(2) symmetry, where O(2) is the group of 2-dimensional
isometries leaving the centre O fixed, consisting of the rotations about O
and the reflections in lines through O.
Although this group is not finite, it is compact. As we shall see, the repre-
sentation theory of compact groups is essentially identical to the finite the-
ory; the main difference being that a compact group has a countable infinity
of simple representations.
For example, the group O(2) has the trivial representation 1, and an infinity
of representations R(1), R(2), . . . , each of dimension 2.
The circular drum has corresponding modes M (0), M (1), M (2), . . . , each
with its characteristic frequency. As in our last example, after taking time
symmetry into account, the solution-space carries a representation ρ of the
product group O(2) × R, which splits into
8. In the last example but one, we considered the 4 hydrogen atoms in the
methane molecule as particles, or solid balls. But now let us consider a
single hydrogen atom, consisting of an electron moving in the field of a
massive central proton.
According to classical non-relativistic quantum mechanics [?], the state of
the electron (and so of the atom) is determined by a wave function ψ(x, y, z, t),
whose evolution is determined by Schrödinger’s equation
\[
i\hbar\,\frac{\partial\psi}{\partial t} = H\psi.
\]
Here H is the hamiltonian operator, given by
\[
H\psi = -\frac{\hbar^2}{2m}\nabla^2\psi + V(r)\psi,
\]
where m is the mass of the electron, V(r) is its potential energy, ℏ is
Planck's constant, and
The essential point is that this is a linear differential equation, whose solu-
tions therefore form a vector space, the solution-space.
O(3) = SO(3) × C2 ,
The state space V is now acted on by the Poincaré group E(1, 3) formed by
the isometries of Minkowski space-time. It follows that V carries a repre-
sentation of E(1, 3).
Each elementary particle corresponds to a simple representation of the
Poincaré group E(1, 3). This group is not compact. It is however a Lie
group; and — as we shall see — a different approach to representation the-
ory, based on Lie algebras, allows much of the theory to be extended to this
case.
A last remark. One might suppose, from its reliance on linearity, that rep-
resentation theory would have no rôle to play in curved space-time. But
that is far from true. Even if the underlying topological space is curved, the
vector and tensor fields on such a space preserve their linear structure. (So
one can, for example, superpose vector fields on a sphere.) Thus represen-
tation theory can still be applied; and in fact, the so-called gauge theories
introduced in the search for a unified ‘theory of everything’ are of precisely
this kind.
Bibliography
[4] Hermann Weyl. The Theory of Groups and Quantum Mechanics. Dover,
1950.
Exercises
All representations are over C, unless the contrary is stated.
In Exercises 01–11 determine all 1-dimensional representations of the given group.
1 ∗ C2 2 ∗∗ C3 3 ∗∗ Cn 4 ∗∗ D2 5 ∗∗ D3
6 ∗∗∗ Dn 7 ∗∗∗ Q8 8 ∗∗∗ A4 9 ∗∗∗∗ An 10 ∗∗ Z
11 ∗∗∗∗ D∞ = hr, s : s2 = 1, rsr = si
Suppose G is a group; and suppose g, h ∈ G. The element [g, h] = ghg⁻¹h⁻¹ is
called the commutator of g and h. The subgroup G′ ≡ [G, G] generated by all the
commutators in G is called the commutator subgroup, or derived group, of G.
12 ∗∗∗ Show that G′ lies in the kernel of any 1-dimensional representation ρ of G,
ie ρ(g) acts trivially if g ∈ G′.
13 ∗∗∗ Show that G′ is a normal subgroup of G, and that G/G′ is abelian. Show
moreover that if K is a normal subgroup of G then G/K is abelian if and only if
G′ ⊂ K. [In other words, G′ is the smallest normal subgroup such that G/G′ is
abelian.]
14 ∗∗ Show that the 1-dimensional representations of G form an abelian group
G∗ under multiplication. [Nb: this notation G∗ is normally only used when G is
abelian.]
15 ∗∗ Show that Cₙ* ≅ Cₙ.
16 ∗∗∗ Show that for any 2 groups G, H
(G × H)∗ = G∗ × H ∗ .
17 ∗∗∗∗ By using the Structure Theorem on Finite Abelian Groups (stating that
each such group is expressible as a product of cyclic groups) or otherwise, show
that
A* ≅ A
for any finite abelian group A.
18 ∗∗ Suppose Θ : G → H is a homomorphism of groups. Then each representa-
tion α of H defines a representation Θα of G.
19 ∗∗∗ Show that the 1-dimensional representations of G and of G/G′ are in one-
one correspondence.
In Exercises 20–24 determine the derived group G′ of the given group G.
20 ∗∗∗ Cn 21 ∗∗∗∗ Dn 22 ∗∗ Z 23 ∗∗∗∗ D∞
24 ∗∗∗ Q8 25 ∗∗∗ Sn 26 ∗∗∗ A4 27 ∗∗∗∗ An
Chapter 2
Equivalent Representations
t:U →V
Remarks:
α : g 7→ A(g), β : g 7→ B(g).
B(g) = P A(g)P −1
for each g ∈ G. This is the condition in matrix terms for two representations
to be equivalent.
T = P SP −1 .
A necessary condition for this is that A, B have the same eigenvalues. For
the characteristic equations of two similar matrices are identical:
\[
\det(PSP^{-1} - \lambda I) = \det P\,\det(S - \lambda I)\,\det P^{-1} = \det(S - \lambda I).
\]
3. In general this condition is necessary but not sufficient. For example, the
matrices
\[
\begin{pmatrix}1 & 0\\ 0 & 1\end{pmatrix},\qquad
\begin{pmatrix}1 & 1\\ 0 & 1\end{pmatrix}
\]
have the same eigenvalues 1, 1, but are not similar. (No matrix is similar to
the identity matrix I except I itself.)
However, there is one important case, of particular relevance to us, where
the converse is true. Let us recall a result from linear algebra.
An n × n complex matrix A is diagonalisable if and only if it satisfies a
separable polynomial equation, ie one without repeated roots.
It is easy to see that if A is diagonalisable then it satisfies a separable equa-
tion. For if
\[
A \sim \begin{pmatrix}
\lambda_1 & & & & \\
& \ddots & & & \\
& & \lambda_1 & & \\
& & & \lambda_2 & \\
& & & & \ddots
\end{pmatrix}
\]
then A satisfies the separable equation
m(x) ≡ (x − λ1 )(x − λ2 ) · · · = 0.
p(x) ≡ (x − λ1 ) · · · (x − λr ) = 0
Multiplying across,
1 = a1 Q1 (x) + · · · + ar Qr (x),
where
\[
Q_i(x) = \prod_{j\neq i}(x - \lambda_j) = \frac{p(x)}{x - \lambda_i}.
\]
Substituting x = A,
I = a1 Q1 (A) + · · · + ar Qr (A).
v = a1 Q1 (A)v + · · · + ar Qr (A)v
= v1 + · · · + vr ,
(A − λi )vi = ai p(A)v = 0.
E = {v : A1 v = λv}
v ∈ E =⇒ A1 (Ai v) = Ai A1 v = λAi v =⇒ Ai v ∈ E.
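The argument above is easy to carry out numerically. The sketch below (an illustration, not from the notes) takes a diagonalisable matrix A with distinct eigenvalues, forms the operators a_iQ_i(A) with a_i = 1/Q_i(λ_i), and checks that they split a vector into eigenvectors and sum back to it.

```python
import numpy as np

# A diagonalisable matrix with distinct eigenvalues 3, 2, 1.
A = np.array([[3.0, 1.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 1.0]])
lams = [3.0, 2.0, 1.0]

def Q(i):
    """Q_i(A) = prod_{j != i} (A - lambda_j I)."""
    M = np.eye(3)
    for j, lam in enumerate(lams):
        if j != i:
            M = M @ (A - lam * np.eye(3))
    return M

v = np.array([1.0, 1.0, 1.0])
parts = []
for i, lam in enumerate(lams):
    a_i = 1.0 / np.prod([lam - mu for j, mu in enumerate(lams) if j != i])
    v_i = a_i * Q(i) @ v
    parts.append(v_i)
    # Each component is an eigenvector: (A - lambda_i I) v_i = 0.
    assert np.allclose((A - lam * np.eye(3)) @ v_i, 0)

# The components recover v, as in  I = sum_i a_i Q_i(A).
assert np.allclose(sum(parts), v)
print(parts)
```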
where s is the rotation through a right-angle (sending A to B), and c is the reflec-
tion in AC.
Now suppose we choose instead the axes OA, OB. Then we obtain the equiv-
alent representation
! !
0 −1 1 0
s 7→ c 7→ .
1 0 0 −1
S3 = hs, t : s3 = t2 = 1, st = ts2 i,
sf = ste = ts2 e = λ2 te = λ2 f.
α = β,
since one is got from the other by the swapping the basis elements: e, f 7→
f, e.
with respect to some basis. But then it follows that this remains the case
with respect to every basis: s is always represented by the matrix I.
In particular, s is always diagonal. So if we diagonalise c—as we know we
can—then we will simultaneously diagonalise s and c, and so too all the
elements of D4 .
Suppose ! !
1 0 λ 0
s 7→ , t 7→ .
0 1 0 µ
Then it is evident that
s 7→ 1, t 7→ λ
and
s 7→ 1, t 7→ µ
will define two 1-dimensional representations of S3 . But we know these
representations; there are just 2 of them. In combination, these will give 4
This is equivalent to the previous case, one being taken into the other by the
change of coordinates (x, y) 7→ (y, x). (In other words, + 1 = 1 + .)
We see from this that we obtain just 3 2-dimensional representations of S3
in this way (in the notation above they will be 1 + 1, 1 + and + ).
Adding the single 2-dimensional representation from the first case, we con-
clude that S3 has just 4 2-dimensional representations.
It is easy to see that no 2 of these 4 representations are equivalent, by consid-
ering the eigenvalues of s and c in the 4 cases.
Exercises
All representations are over C, unless the contrary is stated.
In Exercises 01–15 determine all 2-dimensional representations (up to equiva-
lence) of the given group.
1 ∗∗ C2 2 ∗∗ C3 3 ∗∗ Cn 4 ∗∗∗ D2 5 ∗∗∗ D4
6 ∗∗∗ D5 7 ∗∗∗∗ Dn 8 ∗∗∗ S3 9 ∗∗∗∗ S4 10 ∗∗∗∗∗ Sn
11 ∗∗∗∗ A4 12 ∗∗∗∗∗ An 13 ∗∗∗ Q8 14 ∗∗ Z 15 ∗∗∗∗ D∞
16 ∗∗∗ Show that a real matrix A ∈ Mat(n, R) is diagonalisable over R if and
only if its minimal polynomial has distinct roots, all of which are real.
17 ∗∗∗ Show that a rational matrix A ∈ Mat(n, Q) is diagonalisable over Q if and
only if its minimal polynomial has distinct roots, all of which are rational.
18 ∗∗∗∗ If 2 real matrices A, B ∈ Mat(n, R) are similar over C, are they necessar-
ily similar over R, ie can we find a matrix P ∈ GL(n, R) such that B = P AP −1 ?
19 ∗∗∗∗ If 2 rational matrices A, B ∈ Mat(n, Q) are similar over C, are they
necessarily similar over Q?
20 ∗∗∗∗∗ If 2 integral matrices A, B ∈ Mat(n, Z) are similar over C, are they
necessarily similar over Z, ie can we find an integral matrix P ∈ GL(n, Z) with
integral inverse, such that B = P AP −1 ?
The matrix A ∈ Mat(n, k) is said to be semisimple if its minimal polynomial has
distinct roots. It is said to be nilpotent if Ar = 0 for some r > 0.
21 ∗∗∗ Show that a matrix A ∈ Mat(n, k) cannot be both semisimple and nilpo-
tent, unless A = 0.
22 ∗∗∗ Show that a polynomial p(x) has distinct roots if and only if
SN = N S.
25 ∗∗∗∗ Suppose the matrix B ∈ Mat(n, C) commutes with all matrices that
commute with A, ie
AX = XA =⇒ BX = XB.
Show that B is expressible as a polynomial in A.
Chapter 3
Simple Representations
dim α ≤ kGk.
Proof I (1) is evident since a 1-dimensional space has no proper subspaces, stable
or otherwise.
For (2), suppose α is a simple representation of G in V . Take any v 6= 0 in V ,
and consider the set of all transforms gv of V . Let U be the subspace spanned by
these:
U = hgv : g ∈ Gi.
Each g ∈ G permutes the transforms of v, since
g(hv) = (gh)v.
It follows that g sends U into itself. Thus U is stable under G. Since α is simple,
by hypothesis,
V = U.
Remark: This result can be greatly improved, as we shall see. If k = C—the case
of greatest interest to us—then we shall prove that
\[
\dim\alpha \le \|G\|^{1/2}.
\]
3. The dimension of each simple representation σᵢ divides the order of the group:
dim σᵢ | ‖G‖.
Of course we cannot use these results in any proof; and in fact we will not
even use them in examples. But at least they provide a useful check on our work.
Examples:
1. The first stage in studying the representation theory of a group G is to de-
termine the simple representations of G.
Let us agree henceforth to adopt the convention that if the scalar field k is
not explicitly mentioned, then we may take it that k = C.
We normally start our search for simple representations by listing the 1-
dimensional representations. In this case we know that S3 has just 2 1-
dimensional representations, the trivial representation 1, and the parity rep-
resentation .
Now suppose that α is a simple representation of S3 of dimension > 1.
Recall that
S3 = ⟨s, t : s³ = t² = 1, st = ts²⟩,
where s = (abc), t = (ab).
se = λe,
where
s³ = 1 =⇒ λ³ = 1 =⇒ λ = 1, ω, or ω².
Let
f = te.
Then
sf = ste = ts2 e = λ2 te = λ2 f.
Thus f is also an eigenvector of s, but with eigenvalue λ2 .
Now consider the subspace
U = he, f i
se = λe, sf = λ2 f, te = f, tf = t2 e = e.
V = U.
But this is the same representation as the first, since the coordinate
swap (x, y) 7→ (y, x) takes one into the other.
se = e, sf = f =⇒ sv = v for all v ∈ V .
P (ρI)P −1 = ρI,
if you like.)
So in this case we can turn to t, leaving s to ‘look after itself’. Let e
be an eigenvector of t. Then the 1-dimensional space
U = hei
se = e, te = ±e.
1, ε and α, where
1 : s ↦ 1, t ↦ 1,
ε : s ↦ 1, t ↦ −1,
\[
\alpha : s \mapsto \begin{pmatrix}\omega & 0\\ 0 & \omega^2\end{pmatrix},\qquad
t \mapsto \begin{pmatrix}0 & 1\\ 1 & 0\end{pmatrix}.
\]
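A quick Python check (an illustration, not part of the notes) that this 2-dimensional assignment really does satisfy the relations s³ = t² = 1, st = ts², and that its character takes the values 2, −1, 0 on the classes of e, s, t.

```python
import numpy as np

w = np.exp(2j * np.pi / 3)          # a primitive cube root of unity
s = np.array([[w, 0], [0, w**2]])
t = np.array([[0, 1], [1, 0]])
I = np.eye(2)

assert np.allclose(np.linalg.matrix_power(s, 3), I)   # s^3 = 1
assert np.allclose(t @ t, I)                          # t^2 = 1
assert np.allclose(s @ t, t @ s @ s)                  # st = ts^2

# Character values on representatives of the three classes.
print([np.trace(m).real.round(6) for m in (I, s, t)])  # [2.0, -1.0, 0.0]
```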
s 7→ ±1, t 7→ ±1.
U = he, f i
se = λe, sf = λ3 f, te = f, tf = t2 e = s2 e = λ2 e.
h(1, 1)i
1 : s ↦ 1, t ↦ 1
μ : s ↦ 1, t ↦ −1
ν : s ↦ −1, t ↦ 1
ρ : s ↦ −1, t ↦ −1
\[
\alpha : s \mapsto \begin{pmatrix} i & 0\\ 0 & -i\end{pmatrix},\qquad
t \mapsto \begin{pmatrix}0 & -1\\ 1 & 0\end{pmatrix}.
\]
E(λ) = {v ∈ V : av = λv}.
Thus E(λ) is stable under b, and so under A. But since V is simple, by hypothesis,
it follows that
E(λ) = V.
In other words a acts as a scalar multiple of the identity:
a = λI.
It follows that every subspace of V is stable under a. Since that is true for each
a ∈ A, we conclude that every subspace of V is stable under A. Therefore, since
α is simple, V has no proper subspaces. But that is only true if dim V = 1. J
D2 = {1, a, b, c : a² = b² = c² = 1, bc = cb = a, ca = ac = b, ab = ba = c}.
This has just four 1-dimensional representations, as shown in the following table.
1 a b c
1 1 1 1 1
µ 1 1 −1 −1
ν 1 −1 1 −1
ρ 1 −1 −1 1
Chapter 4
4.1 Addition
Representations can be added and multiplied, like numbers; and the usual
laws of arithmetic hold. There is even a conjugacy operation, analogous to
complex conjugation.
g(u ⊕ v) = gu ⊕ gv.
Remarks:
1. Recall that U ⊕ V is the cartesian product of U and V, where however we
write u ⊕ v rather than (u, v). The structure of a vector space is defined on
this set in the natural way.
α : g 7→ A(g), β : g 7→ B(g).
2. β + α = α + β;
3. α + (β + γ) = (α + β) + γ.
Proof I These are all immediate. For example, the second part follows from the
natural isomorphism
V ⊕ U → U ⊕ V : v ⊕ u ↦ u ⊕ v.
4.2 Multiplication
Definition 4.2 Suppose α, β are representations of G in the vector spaces U, V
over k. Then αβ is the representation of G in U ⊗ V defined by the action
Remarks:
1. The tensor product U ⊗ V of 2 vector spaces U and V may be unfamiliar.
Each element of U ⊗ V is expressible as a finite sum
u₁ ⊗ v₁ + · · · + uᵣ ⊗ vᵣ.
If U has basis {e₁, . . . , eₘ} and V has basis {f₁, . . . , fₙ} then the mn elements
eᵢ ⊗ fⱼ (i = 1, . . . , m; j = 1, . . . , n)
form a basis for U ⊗ V. In particular, U ⊗ V may be constructed as the set of formal sums
u₁ ⊗ v₁ + · · · + uᵣ ⊗ vᵣ,
where 2 sums define the same element if one can be derived from the other
by applying the rules
(u₁ + u₂) ⊗ v = u₁ ⊗ v + u₂ ⊗ v, u ⊗ (v₁ + v₂) = u ⊗ v₁ + u ⊗ v₂, (ρu) ⊗ v = u ⊗ (ρv).
The structure of a vector space is defined on this set in the natural way.
α : g 7→ A(g), β : g 7→ B(g).
αβ : g 7→ A(g) ⊗ B(g).
To write out this matrix S ⊗ T we must order the index-pairs. Let us settle
for the ‘lexicographic order’
(1, 1), (1, 2), . . . , (1, n), (2, 1), . . . , (2, n), . . . , (m, 1), . . . , (m, n).
(In fact the ordering does not matter for our purposes. For if we choose a
different ordering of the rows, then we shall have to make the same change
in the ordering of the columns; and this double change simply corresponds
to a change of basis in the underlying vector space, leading to a similar
matrix to S ⊗ T .)
while
\[
\begin{pmatrix}0&1\\1&0\end{pmatrix}\otimes\begin{pmatrix}0&1\\1&0\end{pmatrix}
=\begin{pmatrix}0&0&0&1\\0&0&1&0\\0&1&0&0\\1&0&0&0\end{pmatrix},
\]
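In Python the matrix S ⊗ T in this ordering is given by numpy's kron; the snippet below (an illustration, not from the notes) reproduces the 4 × 4 product just displayed.

```python
import numpy as np

T = np.array([[0, 1],
              [1, 0]])

print(np.kron(T, T))
# [[0 0 0 1]
#  [0 0 1 0]
#  [0 1 0 0]
#  [1 0 0 0]]
```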
We can simplify this by the change of coordinates (x, y, z, t) 7→ (y, z, t, x). This
will give the equivalent representation (which we may still denote by αβ):
\[
\alpha\beta : s \mapsto \begin{pmatrix}1&0&0&0\\0&1&0&0\\0&0&\omega&0\\0&0&0&\omega^2\end{pmatrix},\qquad
t \mapsto \begin{pmatrix}0&1&0&0\\1&0&0&0\\0&0&0&1\\0&0&1&0\end{pmatrix}.
\]
But now we see that this splits into 2 2-dimensional representations, the second of
which is α itself:
α² = β + α,
where β is the representation
\[
\beta : s \mapsto \begin{pmatrix}1&0\\0&1\end{pmatrix},\qquad
t \mapsto \begin{pmatrix}0&1\\1&0\end{pmatrix}.
\]
The representation β can be split further. That is evident if we note that since
s is represented by I, we can diagonalise t without affecting s. Since t has eigen-
values ±1, this must yield the representation
\[
\beta : s \mapsto \begin{pmatrix}1&0\\0&1\end{pmatrix},\qquad
t \mapsto \begin{pmatrix}1&0\\0&-1\end{pmatrix}.
\]
2. βα = αβ;
3. α(βγ) = (αβ)γ;
4. α(β + γ) = αβ + αγ;
5. 1α = α.
α + β = α + γ =⇒ β = γ.
4.3 Conjugacy
Definition 4.3 Suppose α is a representation of G in the vector space V over
k. Then α* is the representation of G in the dual vector space V* defined by the
action
(gπ)(v) = π(g⁻¹v) (g ∈ G, π ∈ V*, v ∈ V).
Remarks:
1. Recall that the dual vector space V ∗ is the space of linear functionals
π : V → k.
α : g 7→ A(g).
(RS)⁻¹ = S⁻¹R⁻¹, (RS)′ = S′R′,
It is easy to see that swapping the coordinates, (x, y) 7→ (y, x), gives
α∗ = α
Many of the representations we shall meet will share this property of self-conjugacy.
2. (α∗ )∗ = α;
3. (α + β)∗ = α∗ + β ∗ .
4. (αβ)∗ = α∗ β ∗ .
5. 1∗ = 1.
Semisimple Representations
α = σ1 + · · · + σr .
U has dimension 1, with basis {(1, 1, 1)}; W has dimension 2, with basis {(1, −1, 0), (−1, 0, 1)}.
Evidently
U ∩ V = 0.
Recall that a sum U + V of vector subspaces is direct,
U + V = U ⊕ V,
k³ = U ⊕ W.
ρ = 1 + α,
v = (x, y, z) (x + y + z = 0).
(12)v = (y, x, z) ∈ V ;
and so
v − (12)v = (x − y, y − x, 0) = (x − y)(1, −1, 0) ∈ V.
Hence
(1, −1, 0) ∈ V.
It follows that
(−1, 0, 1) = (132)(1, −1, 0) ∈ V
also. But these 2 elements generate W; hence
V = W.
ρ=1+α
V = S1 + · · · + S r .
Then V is semisimple.
S1 ∩ S2 = 0 or S2 .
S 1 + S2 = S1 .
(S1 + S2 ) ∩ S3 = 0 or S3 ,
S1 + S 2 + S3 = S1 + S2 .
Remark: The subset {Si1 , . . . , Sis } depends in general on the order in which we
take S1 , . . . , Sr . In particular, since Si1 = S1 , we can always specify that any one
of S1 , . . . , Sn appears in the direct sum.
1. V is semisimple;
Exercises
In Exercises 01–15 calculate eX for the given matrix X:
Z → GL(n, k) : m 7→ T m
5. Suppose k = GF(2) = {0, 1}, the finite field with 2 elements. Show that
the representation of C2 = {e, g} given by
\[
g \mapsto \begin{pmatrix}1 & 1\\ 0 & 1\end{pmatrix}
\]
is not semisimple.
Chapter 6
(It’s not really necessary to divide by the order of the group; we do it because the
idea of ‘averaging over the group’ occurs in other contexts.)
It is easy to see that the resulting form is invariant:
\[
P(gu, gv) = \frac{1}{\|G\|}\sum_{h\in G} Q(hgu, hgv)
= \frac{1}{\|G\|}\sum_{h\in G} Q(hu, hv)
= P(u, v).
\]
Moreover, a stable subspace U then has a stable orthogonal complement: for
g ∈ G, w ∈ U⊥ ⟹ ⟨u, w⟩ = 0 ∀u ∈ U
⟹ ⟨gu, gw⟩ = ⟨u, w⟩ = 0 ∀u ∈ U
⟹ ⟨u, gw⟩ = ⟨g(g⁻¹u), gw⟩ = ⟨g⁻¹u, w⟩ = 0 ∀u ∈ U
⟹ gw ∈ U⊥.
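The averaging construction is easy to carry out numerically. The sketch below (not part of the notes; it reuses the natural 2-dimensional representation of D4 from Chapter 1) averages an arbitrary positive-definite form over the group and checks that the result is invariant.

```python
import numpy as np

# Generators of the natural 2-dimensional representation of D4.
gens = [np.array([[0., -1.], [1., 0.]]),    # r
        np.array([[0.,  1.], [1., 0.]])]    # c

# Close under multiplication to obtain all 8 group elements.
group = [np.eye(2)]
changed = True
while changed:
    changed = False
    for g in list(group):
        for h in gens:
            gh = g @ h
            if not any(np.allclose(gh, k) for k in group):
                group.append(gh)
                changed = True
assert len(group) == 8

# Start from an arbitrary positive-definite (here real symmetric) form Q ...
Q = np.array([[2.0, 0.3], [0.3, 1.0]])

# ... and average:  P(u, v) = (1/|G|) sum_g Q(gu, gv),  ie  P = (1/|G|) sum_g g^T Q g.
P = sum(g.T @ Q @ g for g in group) / len(group)

# P is G-invariant: g^T P g = P for every g.
for g in group:
    assert np.allclose(g.T @ P @ g, P)
print(P)
```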
Examples:
Then
U = {(x, x, x) : x ∈ R}, U⊥ = {(x, y, z) : x + y + z = 0},
which is just the complement we found before. This is not surprising since—
as we observed earlier—U and U ⊥ are the only proper stable subspaces of
R3 .
|x₁|² + |x₂|² = x̄₁x₁ + x̄₂x₂.
Let us take
Q(x, y) = 2x̄x + ȳy − ix̄y + iȳx.
Note that
D4 = {e, s, s2 , s3 , t, ts, ts2 , ts3 }.
For these 8 elements are certainly distinct, eg
s2 = ts3 =⇒ ts = 1 =⇒ s = t.
Averaging,
\[
P(x, y) = \frac{1}{8}\sum_{g\in D_4} Q(g(x, y)) = \frac{3}{2}(\bar x x + \bar y y).
\]
det(A − λB) = 0,
E = {v : Av = λBv}
(1 − p)2 = 1 − 2p + p2 = 1 − 2p + p = 1 − p.
Note that
v ∈ im p ⇐⇒ pv = v.
Note also that
im(1 − p) = ker p,
since
v = (1 − p)w =⇒ pv = (p − p2 )w = 0,
while
pv = 0 =⇒ v = (1 − p)v.
Thus the splitting can equally well be written
V = im p ⊕ ker p.
v =u+w (u ∈ U, w ∈ W )
then we set
pv = u.
(Although the projection p is often referred to as ‘the projection onto U ’ it
depends on W as well as U . In general there are an infinity of projections onto U ,
corresponding to the infinity of complements W . When there is a positive-definite
form on V —quadratic or hermitian, according as k = R or C—then one of these
projections is distinguished: namely the ‘orthogonal projection’ corresponding to
the splitting
V = U ⊕ U⊥.
But we are not assuming the existence of such a form at the moment.)
Now suppose U is a stable subspace of V . Choose any complementary sub-
space W:
V = U ⊕ W.
In general W will not be stable under G. Our task is to find a stable complementary subspace W₀:
V = U ⊕ W = U ⊕ W₀.
(1 − p)g(1 − p) = g(1 − p)
gp = pg.
u ∈ U =⇒ gu ∈ U =⇒ p(gu) = gu.
Hence by addition
u ∈ U =⇒ P u = u.
Conversely,
v ∈ V =⇒ pgv ∈ U =⇒ gpgv ∈ U.
So by addition
v ∈ V =⇒ P v ∈ U.
These 2 results imply that P 2 = P , and that P projects onto U . J
Remarks:
qp = p, pq = q.
\[
pP = \frac{1}{\|G\|}\sum_g p\,g^{-1}pg = \frac{1}{\|G\|}\sum_g g^{-1}pg = P.
\]
V G = {v ∈ V : gv = v∀g ∈ G}.
3. It is worth noting that our alternative proof works over any scalar field k,
provided ‖G‖ ≠ 0 in k. Thus it even works over the finite field GF(pⁿ),
unless p | ‖G‖.
Of course we are not considering such modular representations (as rep-
resentations over finite fields are known); but our argument shows that
semisimplicity still holds unless the characteristic p of the scalar field di-
vides the order of the group.
Chapter 7
Remarks:
1. A G-map t : U → V is a linear map which preserves the action of G:
t:U →V
ker t = {u ∈ U : tu = 0} and im t = {v ∈ V : ∃u ∈ U, tu = v}
u ∈ ker t =⇒ tu = 0
=⇒ t(gu) = g(tu) = 0
=⇒ gu ∈ ker t,
while
v ∈ im t =⇒ v = tu
=⇒ t(gu) = g(tu) = gv
=⇒ gv ∈ im t.
ker t = 0 or U, im t = 0 or V.
ker t = 0, im t = V.
I(α, α) = 1.
E = E(λ) = {v ∈ V : tv = λv}
t = λ1.
hom(U ⊕ V, W) ≅ hom(U, W) ⊕ hom(V, W),
hom(U, V ⊕ W) ≅ hom(U, V) ⊕ hom(U, W).
Take the first. This expresses the fact that a linear map
t : U ⊕ V → W
corresponds to a pair of linear maps
t₁ : U → W, t₂ : V → W.
In fact t₁, t₂ are the restrictions of t to U ⊂ U ⊕ V and V ⊂ U ⊕ V; and
t(u ⊕ v) = t₁u + t₂v.
In much the same way, the second result expresses the fact that a linear map
t : U → V ⊕ W
corresponds to a pair of linear maps
t₁ : U → V, t₂ : U → W.
In fact
t₁ = π₁t, t₂ = π₂t,
where π₁, π₂ are the projections of V ⊕ W onto V, W respectively; and
tu = t₁u ⊕ t₂u.
The third relation,
hom(U ⊗ V, W) ≅ hom(U, V* ⊗ W),
where
V* = hom(V, k),
is rather more difficult to establish.
We can divide the task in two. First, there is a natural equivalence
hom(U, hom(V, W)) ≅ hom(U ⊗ V, W).
For this, note that there is a 1–1 correspondence between linear maps b : U ⊗ V → W and bilinear maps
B : U × V → W.
(This is sometimes taken as the definition of U ⊗ V.) So we have to show how
such a bilinear map B(u, v) gives rise to a linear map
t : U → hom(V, W).
Secondly, there is a natural equivalence
hom(V, W) ≅ V* ⊗ W.
For this, note first that both sides are ‘additive functors’ in W, ie
hom(V, W₁ ⊕ W₂) = hom(V, W₁) ⊕ hom(V, W₂),
V* ⊗ (W₁ ⊕ W₂) = (V* ⊗ W₁) ⊕ (V* ⊗ W₂),
while
hom(V, k) ≅ V* ⊗ k,
since
U ⊗ k ≅ U.
In the G-space setting the corresponding results read
hom_G(U ⊕ V, W) ≅ hom_G(U, W) ⊕ hom_G(V, W),
hom_G(U ⊗ V, W) ≅ hom_G(U, V* ⊗ W).
since only those summands for which σi = σ will contribute to the sum. Thus
\[
m = \frac{I(\sigma, \alpha)}{I(\sigma, \sigma)}.
\]
It follows that σ will occur the same number m times in every expression for α
as a sum of simple parts. Hence two such expressions can only differ in the order
of their summands. J
Although the expression
α = σ1 + · · · + σr
α = eσ = σ + · · · + σ.
Proof I These results follow easily from the Uniqueness Theorem. But it is useful
to give an independent proof, since we can use this to construct an alternative
proof of the Uniqueness Theorem.
Lemma 7.1 Suppose
V = U1 + · · · + Ur
is an expression for the G-space V as a sum of simple spaces; and suppose the
subspace U ⊂ V is also simple. Then U is isomorphic (as a G-space) to one of
the summands:
U∼ = Ui
for some i.
Proposition 7.6 Every semisimple G-space V is the direct sum of its isotypic
components:
V = V_{σ₁} ⊕ · · · ⊕ V_{σᵣ}.
V = Vσ1 + · · · + Vσr .
for i = 2, . . . , r.
Suppose not. Then we can find a simple subspace
α = σ1 + · · · + σr
(where the σi are distinct) then V has a unique expression as a direct sum of
simple subspaces.
Remark: It is easy to see that multiplicity does give rise to non-uniqueness. For
suppose
V = U ⊕ U,
where U is simple. For each λ ∈ k consider the map
u ↦ u ⊕ λu : U → U ⊕ U = V.
U(λ) = {u ⊕ λu : u ∈ U}.
U(λ) ≠ U(μ) ⟺ λ ≠ μ.
It follows that
V = U(λ) ⊕ U(μ)
for any λ, μ with λ ≠ μ.
Chapter 8
χ(g) = tr (α(g)) .
Remarks:
1. Recall that the trace of an n × n matrix A is the sum of the diagonal elements:
\[
\operatorname{tr} A = \sum_{1\le i\le n} A_{ii}.
\]
Note that
tr ABC ≠ tr BAC
in general. However the trace is invariant under cyclic permutations, eg
tr PAP⁻¹ = tr P⁻¹PA = tr A.
Writing χ for χ_α,
\begin{align*}
\chi(e) &= \dim\alpha = 2,\\
\chi(s) &= i - i = 0,\\
\chi(s^2) &= \operatorname{tr}\begin{pmatrix}-1 & 0\\ 0 & -1\end{pmatrix} = -1 - 1 = -2,\\
\chi(s^3) &= \operatorname{tr}\begin{pmatrix}-i & 0\\ 0 & i\end{pmatrix} = -i + i = 0,\\
\chi(t) &= 0,\\
\chi(st) &= \operatorname{tr}\begin{pmatrix}0 & i\\ -i & 0\end{pmatrix} = 0,\\
\chi(s^2 t) &= \operatorname{tr}\begin{pmatrix}0 & -1\\ -1 & 0\end{pmatrix} = 0,\\
\chi(s^3 t) &= \operatorname{tr}\begin{pmatrix}0 & -i\\ i & 0\end{pmatrix} = 0.
\end{align*}
In summary
χ(e) = 2, χ(s²) = −2, χ(g) = 0 if g ≠ e, s².
Aii Bjj (1 ≤ i ≤ m, 1 ≤ j ≤ n)
Thus
tr(A ⊗ B) = tr A tr B.
(3) If α takes the matrix form
g 7→ A(g)
Hence
χα∗ (g) = tr A(g −1 )0 = tr A(g −1 ) = χα (g −1 ).
(4) and (5) are immediate. J
Remark: In effect the character defines a ring-homomorphism
χ : R(G, k) → C(G, k)
Proof I It is sufficient to prove the result when α = 1. For on the left-hand side
\[
\sum_{g\in G}\chi_\alpha(g^{-1})\chi_\beta(g)
= \sum_g \chi_{\alpha^*}(g)\chi_\beta(g)
= \sum_g \chi_{\alpha^*\beta}(g),
\]
By definition, if α is a representation in V ,
Now
hom(k, V ) = V,
with the vector v ∈ V corresponding to the map
λ 7→ λv : k → V.
homG (k, V ) = V G ,
V G = {v ∈ V : gv = v ∀g ∈ G}
that is,
\[
\pi = \frac{1}{\|G\|}\sum_{g\in G}\alpha(g).
\]
It is evident that πv ∈ V^G for all v ∈ V, ie πv is invariant under G. For
\[
g\pi v = \frac{1}{\|G\|}\sum_{h\in G} ghv = \frac{1}{\|G\|}\sum_{h\in G} hv = \pi v,
\]
tr p = dim U.
V = im p ⊕ ker p.
Let e₁, . . . , eₘ be a basis for im p = U, and let eₘ₊₁, . . . , eₙ be a basis for ker p.
Then
\[
pe_i = \begin{cases} e_i & 1 \le i \le m,\\ 0 & m+1 \le i \le n. \end{cases}
\]
It follows that the matrix of p with respect to the basis e₁, . . . , eₙ is
\[
P = \begin{pmatrix}
1 & & & & \\
& \ddots & & & \\
& & 1 & & \\
& & & 0 & \\
& & & & \ddots
\end{pmatrix}
\]
tr p = tr P = m = dim U.
C
Applying this to the averaging map π,
tr π = dim V G .
Thus
\[
\dim V^G = \frac{1}{\|G\|}\sum_g \chi_\alpha(g),
\]
as we had to show. J
Proposition 8.2 If k = R,
χ_{α*}(g) = χ_α(g⁻¹) = χ_α(g).
If k = C,
\[
\chi_{\alpha^*}(g) = \chi_\alpha(g^{-1}) = \overline{\chi_\alpha(g)}.
\]
χα (g) = tr α(g) = λ1 + · · · + λn .
In fact, we can diagonalise α(g), ie we can find a basis with respect to which
\[
g \mapsto A(g) = \begin{pmatrix}\lambda_1 & & \\ & \ddots & \\ & & \lambda_n\end{pmatrix}.
\]
Now
\[
A(g^{-1}) = A(g)^{-1} = \begin{pmatrix}\lambda_1^{-1} & & \\ & \ddots & \\ & & \lambda_n^{-1}\end{pmatrix},
\]
and so
\[
\chi_\alpha(g^{-1}) = \operatorname{tr} A(g^{-1}) = \lambda_1^{-1} + \cdots + \lambda_n^{-1}.
\]
But since G is a finite group, gⁿ = e for some n (eg for n = ‖G‖); thus the λᵢ are roots of unity, λᵢ⁻¹ = λ̄ᵢ, and so
\[
\chi_\alpha(g^{-1}) = \bar\lambda_1 + \cdots + \bar\lambda_n = \overline{\chi_\alpha(g)}.
\]
The result for k = R follows from this. For if A is a real matrix satisfying
An = I then we may regard A as a complex matrix, and so deduce by the argument
above that
\[
\operatorname{tr}(A^{-1}) = \overline{\operatorname{tr} A}.
\]
But since A is real, so is tr A, and therefore
\[
\operatorname{tr}(A^{-1}) = \operatorname{tr} A.
\]
by
\[
\langle u, v\rangle =
\begin{cases}
\dfrac{1}{\|G\|}\sum_{g\in G}\overline{u(g)}\,v(g) & \text{if } k = C,\\[2mm]
\dfrac{1}{\|G\|}\sum_{g\in G}u(g)\,v(g) & \text{if } k = R.
\end{cases}
\]
2. I(α, β) = hχα , χβ i.
Proposition 8.4 Two representations are equivalent if and only if their characters
are equal:
α = β ⇐⇒ χα (g) = χβ (g) for all g ∈ G.
Proof I If α = β then
B(g) = PA(g)P⁻¹
for some P. Hence
χ_β(g) = tr B(g) = tr(PA(g)P⁻¹) = tr A(g) = χ_α(g).
On the other hand, suppose χ_α(g) = χ_β(g) for all g ∈ G. Then for each
simple representation σ of G over k,
\[
I(\sigma, \alpha) = \frac{1}{\|G\|}\sum_{g\in G}\chi_\sigma(g^{-1})\chi_\alpha(g)
= \frac{1}{\|G\|}\sum_{g\in G}\chi_\sigma(g^{-1})\chi_\beta(g)
= I(\sigma, \beta).
\]
It follows that σ occurs the same number of times in α and β. Since this is true
for all simple representations σ,
α = β.
J
g 0 ∼ g =⇒ χα (g 0 ) = χα (g).
Proof I If
g 0 = xgx−1
then (since a representation g 7→ A(g) is a homomorphism)
A(g 0 ) = A(x)A(g)A(x−1 )
= A(x)A(g)A(x)−1 .
J
hχα , χβ i = 0.
I(α, β) = 0.
J
When k = C we can be a little more precise.
Proof I Again, this is simply a restatement of the result for the intertwining num-
ber. J
Theorem 8.2 The group G has at most s simple representations over k, where s
is the number of classes in G.
X ⊂ C(G, k).
Proof of Lemma B Suppose the conjugacy classes are C1 , . . . , Cn . Let ci (g) denote
the characteristic function of Ci , ie
\[
c_i(g) = \begin{cases} 1 & \text{if } g \in C_i,\\ 0 & \text{otherwise.}\end{cases}
\]
hvi , vj i = 0 if i 6= j.
Suppose
λ1 v1 + · · · + λr vr = 0.
Taking the inner product of vi with this relation,
λ1 hvi , v1 i + · · · + λr hvi , vr i = 0 =⇒ λi = 0.
Since this is true for all i, the vectors v1 , . . . , vr must be linearly independent. C
Now consider the simple characters of G over k. They are mutually orthogo-
nal, by the last Proposition; and so they are linearly independent, by the Lemma.
But they belong to the space X of class functions. Hence their number cannot
exceed the dimension of this space, which by Lemma 1 is s. J
Remark: We shall see that when k = C, the number of simple representations
is actually equal to the number of classes. This is equivalent, by the reasoning
above, to the statement that the characters span the space of class functions.
Our major aim now is to establish this result. We shall give 2 proofs, one
based on induced representations, and one on the representation theory of product
groups.
Example: Since characters are class functions, it is only necessary to compute
their values for 1 representative from each class. The character table of a group
G over k tabulates the values of the simple representations on the various classes.
By convention, if the scalar field k is not specified it is understood that we are
speaking of representations over C.
As an illustration, let us take the group S3 . The 6 elements divide into 3
classes, corresponding to the 3 cycle types:
[1³]: e
[21]: (bc), (ac), (ab)
[3]: (abc), (acb)
It follows that S3 has at most 3 simple characters over C. Since we already know
3, namely the 2 1-dimensional representations 1, ε and the 2-dimensional repre-
sentation α, we have the full panoply.
We draw up the character table as follows:
class   [1³]   [21]   [3]
size      1      3     2
1         1      1     1
ε         1     −1     1
α         2      0    −1
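The orthogonality of simple characters used above can be verified directly from this table. The Python sketch below (not from the notes) computes ⟨χᵢ, χⱼ⟩ = (1/‖G‖) Σ over classes of (size · χᵢ · conj χⱼ) and recovers the identity matrix.

```python
import numpy as np

sizes = np.array([1, 3, 2])            # sizes of the classes [1^3], [21], [3]
table = np.array([[1,  1,  1],         # trivial representation 1
                  [1, -1,  1],         # parity representation
                  [2,  0, -1]])        # 2-dimensional representation alpha
order = sizes.sum()                    # |S3| = 6

gram = np.array([[np.sum(sizes * table[i] * np.conj(table[j])) / order
                  for j in range(3)] for i in range(3)])
print(gram)    # the identity matrix: the simple characters are orthonormal
```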
Proof I Let cx (t) denote the characteristic function of the 1-point subset {x}, ie
\[
c_x(t) = \begin{cases} 1 & \text{if } t = x,\\ 0 & \text{otherwise.}\end{cases}
\]
The kXk functions cx (t) form a basis for the vector space C(X, k); and the action
of g ∈ G on this basis is given by
gcx = cgx ,
since
gcx (t) = cx (g −1 t) = 1 ⇐⇒ g −1 t = x ⇐⇒ t = gx.
It follows that with respect to this basis
g 7→ A(g),
In particular
\[
A_{xx} = \begin{cases} 1 & \text{if } x = gx,\\ 0 & \text{otherwise.}\end{cases}
\]
Hence
\[
\chi_\alpha(g) = \operatorname{tr} A = \sum_x A_{xx} = \|\{x : gx = x\}\|.
\]
J
Example: Consider the action of the group S3 on X = {a, b, c}. Let us denote the
resulting representation by ρ. We only need to compute χρ (g) for 3 values of g,
namely 1 representative of each class.
We know that
χρ (e) = dim ρ = kXk = 3.
The transposition (bc) (for example) has just 1 fixed point, namely a. Hence
χρ (bc) = 1.
χρ (abc) = 0.
ρ = r · 1 + s · ε + t · α,
Definition 9.1 The regular representation reg of the group G over k is the per-
mutational representation defined by the action
(g, x) 7→ gx
of G on itself.
(g, x) 7→ xg −1
Proof I Plugging the result for the character of reg above into the formula for
the intertwining number,
\[
I(\alpha, \mathrm{reg}) = \frac{1}{\|G\|}\sum_{g\in G}\chi_\alpha(g^{-1})\chi_{\mathrm{reg}}(g)
= \frac{1}{\|G\|}\,\|G\|\,\chi_\alpha(e)
= \dim\alpha.
\]
J
This result shows that every simple representation occurs in the regular repre-
sentation, since I(σ, reg) > 0. When k = C we can be more precise.
by the Proposition. J
Proof I This follows at once on taking the dimensions on each side of the identity
\[
\mathrm{reg} = \sum_\sigma (\dim\sigma)\,\sigma.
\]
kS5 k = 120;
ρ = 1 + α,
χ_α([21³]) = χ_ρ([21³]) − 1 = 3 − 1 = 2,
while χ_{εα}([21³]) = ε([21³]) χ_α([21³]) = −2. Hence
εα ≠ α.
Thus we have found 4 of the 7 (or fewer) simple representations of S5: 1, ε, α, εα.
Our dimensional equation reads
120 = 1² + 1² + 4² + 4² + a² + b² + c²,
where a, b, c ∈ N, with a, b, c ≠ 1. (We are allowing for the fact that S5 might
have < 7 simple representations.) In other words,
a² + b² + c² = 86.
It follows that
a² + b² + c² ≡ 6 (mod 8).
Now
n² ≡ 0, 1, or 4 (mod 8)
according as n ≡ 0 (mod 4), n is odd, or n ≡ 2 (mod 4). The only way to
get 6 is as 4 + 1 + 1. In other words, 2 of a, b, c must be odd, and the other must
be ≡ 2 (mod 4). (In particular a, b, c ≠ 0. So S5 must in fact have 7 simple
representations.)
By (3) above, the 2 odd dimensions must be equal: say a = b. Thus
2a² + c² = 86.
a = b = 5, c = 6.
1, 1, 4, 4, 5, 5, 6.
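The little diophantine argument can be confirmed by exhaustive search. The following Python lines (an illustration only) look for all a ≤ b ≤ c with a² + b² + c² = 86 and none equal to 1; the unique solution is (5, 5, 6), in agreement with the list of dimensions above.

```python
# All a <= b <= c with a, b, c >= 2 and a^2 + b^2 + c^2 = 86.
solutions = [(a, b, c)
             for a in range(2, 10)
             for b in range(a, 10)
             for c in range(b, 10)
             if a * a + b * b + c * c == 86]
print(solutions)      # [(5, 5, 6)]  -- so the remaining dimensions are 5, 5, 6
```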
Chapter 10
Induced Representations
2. (αβ)H = αH βH
3. (α∗ )H = (αH )∗
4. 1H = 1
5. dim αH = dim α
We have found nothing about χ([22 ]) and χ([4]), since these 2 classes don’t inter-
sect S3 . However, if we call the values x and y as shown, then the 2 equations
I(1, γ) = 0, I(, γ) = 1
give
15a + 3b − 6c + 3x + 6y = 0
3a + 15b − 6c + 3x + 6y = 0
Setting
s = a + b, t = a − b,
for simplicity, these yield
I(γ, γ) = 1.
Thus
24 = (s + 2c)2 + 6t2 + 3(−3s + 2c)2 + 8(s − c)2 + 6t2 .
On simplification this becomes
Noting that s, t, c are all integral, and that s, c ≥ 0, we see that there are just 3
solutions to this diophantine equation:
(gf )(x) = f (g −1 x)
Remark: That V is a G-subspace follows from the fact that we are acting on G
with G and H from opposite sides (G on the left, H on the right); and their actions
therefore commute:
(gx)h = g(xh).
Thus if F ∈ V then
ie gF ∈ V .
This definition is too cumbersome to be of much practical use. The following
result offers an alternative, and usually more convenient, starting point.
(b) V = g₁U′ ⊕ g₂U′ ⊕ · · · ⊕ gᵣU′.
Moreover the induced representation α^G is uniquely characterised by the exis-
tence of such a subspace.
Remarks:
1. After the lemma, we may write
V = g₁U ⊕ g₂U ⊕ · · · ⊕ gᵣU.
ggi = gj h.
gv = ggi u = gj (hu).
3. The difficulty of taking this result as the definition of αG lies in the awk-
wardness of showing that the resulting representation does not depend on
the choice of coset-representatives.
It follows that the values of F on any coset gi H are completely determined by its
value at one point gi . Thus F is completely determined by its r values
Let us write
F ←→ (u1 , u2 , ..., ur ).
Then it is readily verified that
V = g₁U′ ⊕ g₂U′ ⊕ · · · ⊕ gᵣU′.
2. kT k kHk = kGk.
It is readily verified that these 2 conditions imply that each element g ∈ G is
uniquely expressible in the form
g = th (t ∈ T, h ∈ H)
(the Viergruppe). Let’s make the latter choice; the fact that T is normal (self-
conjugate) in G should simplify the calculations. We have
e1 = e, e2 = f, e3 = (ab)(cd)e, e4 = (ab)(cd)f,
e5 = (ac)(bd)e, e6 = (ac)(bd)f, e7 = (ad)(cb)e, e8 = (ad)(bc)f,
x = (a1 a2 . . . ar )(b1 b2 . . . bs ) . . .
hth−1 ∈ V4
ht = sh,
where s ∈ V4 .
Now let’s determine the matrix representing (ab). By the result above, we
have
Thus
In fact
(ab)e1 = e2 , (ab)e2 = e1 ,
(ab)e3 = e4 , (ab)e4 = e3 ,
(ab)e5 = e8 , (ab)e6 = e7 ,
(ab)e7 = e6 , (ab)e8 = e5 .
Hence
\[
(ab) \mapsto \begin{pmatrix}
0&1&0&0&0&0&0&0\\
1&0&0&0&0&0&0&0\\
0&0&0&1&0&0&0&0\\
0&0&1&0&0&0&0&0\\
0&0&0&0&0&0&0&1\\
0&0&0&0&0&0&1&0\\
0&0&0&0&0&1&0&0\\
0&0&0&0&1&0&0&0
\end{pmatrix}.
\]
It is not hard to see that (ab) and (abcd) generate S4. So the representation
α^{S4} will be completely determined—in principle, at least—if we establish the
matrix representing (abcd). We see now that it was easier to determine the matrix
representing (ab), because (ab) ∈ S3 . But the general case is not difficult. Notice
that
(abcd) = (ad)(bc) · (ac)
It follows that (for example)
h̄ ⊂ ḡ.
Or, to put it the other way round, each class ḡ in G splits into classes h1 , . . . , hr in
H:
ḡ ∩ H = h1 ∪ · · · ∪ hr .
\[
\chi_{\beta^G}(g) = \sum_i \chi_\beta(g_i^{-1}gg_i),
\]
where the sum extends over those coset-representatives gᵢ for which gᵢ⁻¹ggᵢ ∈ H.
v = gi u;
ggi = gj h.
Then
gv = ggi u = gj (hu).
So
g(gi U ) ⊂ gj U.
Thus the basis elements in gi U cannot contribute to χβ G (g) unless i = j, that is,
unless ggi = gi h, ie
gi−1 ggi = h ∈ H.
Moreover if this is so then
g(gi ej ) = gi (hej ),
ie the m × m matrix defining the action of g on gi U with respect to the basis
gi e1 , ..., gi em is just B(h); so that its contribution to χβ G (g) is
χβ (h).
The result follows on adding the contributions from all those summands sent into
themselves by g. J
gi0 = gi h.
and
χβ (h−1 h0 h) = χβ (h0 ).
Thus if we sum over all the elements of G, we shall get each coset-contribution
just kHk times. J
To return to the proof of the Proposition, we compute how many times each
element h ∈ H occurs in the sum above.
Two elements g′, g″ define the same conjugate of g in G, ie
g′gg′⁻¹ = g″gg″⁻¹,
if and only if
g″N(g) = g′N(g),
where
N (g) = {x ∈ G : gx = xg}.
It follows that each G-conjugate h of g in H will occur just kN (g)k times in the
sum of Corollary 1. Thus if we sum over these elements h we must multiply by
kN (g)k.
The result follows, since
\[
\|N(g)\| = \frac{\|G\|}{\|\bar g\|}
\]
by the same argument, each conjugate x−1 gx of g arising from kN (g)k elements
x. J
Examples:
1. Let us look again at αS3 →S4 . The classes of S4 and S3 are related as follows:
[14 ] ∩ S3 = [13 ]
[212 ] ∩ S3 = [21]
[22 ] ∩ S3 = ∅
[31] ∩ S3 = [3]
[4] ∩ S3 = ∅
Hence
\begin{align*}
\chi_{\alpha^{S_4}}([1^4]) &= \tfrac{24}{6\cdot 1}\,\chi_\alpha([1^3]) = 8,\\
\chi_{\alpha^{S_4}}([21^2]) &= \tfrac{24}{6\cdot 6}\,3\chi_\alpha([21]) = 0,\\
\chi_{\alpha^{S_4}}([2^2]) &= 0,\\
\chi_{\alpha^{S_4}}([31]) &= \tfrac{24}{6\cdot 8}\,2\chi_\alpha([3]) = -1,\\
\chi_{\alpha^{S_4}}([4]) &= 0.
\end{align*}
Since
\[
I(\alpha^{S_4}, \alpha^{S_4}) = \tfrac{1}{24}\bigl(8^2 + 8\cdot 1^2\bigr) = 3,
\]
α^{S4} has just 3 distinct simple parts, whose determination is left to the reader.
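The arithmetic in this example is easy to automate. The Python sketch below (not part of the notes) applies the formula χ_{β^G}(ḡ) = (‖G‖/(‖H‖‖ḡ‖)) Σ_{h̄⊂ḡ} ‖h̄‖ χ_β(h̄) to α induced from S3 to S4 and recomputes the values 8, 0, 0, −1, 0 and I = 3.

```python
from fractions import Fraction

G_order, H_order = 24, 6
# S4-classes: (class size, list of (size, chi_alpha) for the S3-classes meeting it).
classes = {
    "[1^4]":  (1, [(1, 2)]),    # meets [1^3] in S3, chi_alpha = 2
    "[21^2]": (6, [(3, 0)]),    # meets [21]  in S3, chi_alpha = 0
    "[2^2]":  (3, []),          # misses S3
    "[31]":   (8, [(2, -1)]),   # meets [3]   in S3, chi_alpha = -1
    "[4]":    (6, []),          # misses S3
}

induced = {}
for name, (gsize, parts) in classes.items():
    total = sum(Fraction(hsize) * chi for hsize, chi in parts)
    induced[name] = Fraction(G_order, H_order * gsize) * total
print(induced)        # values 8, 0, 0, -1, 0 (as Fractions)

# I(alpha^G, alpha^G) = (1/|G|) sum over classes of size * chi^2.
I = sum(gsize * induced[name] ** 2
        for name, (gsize, _) in classes.items()) / G_order
print(I)              # 3
```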
The relation between S4 and S3 is unusual, in that classes never split. If ḡ
is a class in S4 then ḡ ∩ S3 is either a whole class h̄ in H, or else is empty.
This is true more generally for Sn and Sm (m < n), where Sm is identified
with the subgroup of Sn leaving the last n − m elements fixed. If ḡ is a class
in Sn , then
ḡ ∩ Sm = h̄ or ∅.
(abcd) 7→ i
We have
[14 ] ∩ C4 = {e}
[212 ] ∩ C4 = ∅
[22 ] ∩ C4 = {(ac)(bd)}
[31] ∩ C4 = ∅
[4] ∩ C4 = {(abcd), (adcb)}
Hence
\begin{align*}
\chi_{\theta^{S_4}}([1^4]) &= \tfrac{24}{4\cdot 1}\,\chi_\theta(e) = 6,\\
\chi_{\theta^{S_4}}([21^2]) &= 0,\\
\chi_{\theta^{S_4}}([2^2]) &= \tfrac{24}{4\cdot 3}\,\chi_\theta((ac)(bd)) = -2,\\
\chi_{\theta^{S_4}}([31]) &= 0,\\
\chi_{\theta^{S_4}}([4]) &= \tfrac{24}{4\cdot 6}\,\bigl(\chi_\theta((abcd)) + \chi_\theta((adcb))\bigr) = i + (-i) = 0.
\end{align*}
Since
\[
I(\theta^{S_4}, \theta^{S_4}) = \tfrac{1}{24}\bigl(6^2 + 3\cdot 2^2\bigr) = 2,
\]
θ^{S4} has just 2 distinct simple parts, whose elucidation we again leave to the
reader.
Proposition 10.2
1. (α + α′)^G = α^G + α′^G;
2. (α*)^G = (α^G)*;
3. dim α^G = [G : H] dim α.
Lemma 10.4 Suppose G acts transitively on the finite set X. Let α be the corre-
sponding representation of G. Take x ∈ X; and let
Sx = {g ∈ G : gx = x}
α = 1Sx →G ,
Remark: The result is easily extended to non-transitive actions. For in that case the
set splits into a number of orbits, on each of which G acts transitively. On applying
the Proposition to each orbit, we conclude that any permutation representation can
be expressed as a sum of representations, each of which arises by induction from
the trivial representation of some subgroup of G.
Proof I By Definition 1,
α0 = 1Sx →G
is the representation in the subspace
V ⊂ C(G)
F (gh) = F (g),
ie F is constant on each coset gSx . Thus V can be identified with the space
C(G/Sx , k) of functions on the set G/Sx of Sx -cosets in G.
On the other hand, the G-sets X and G/Sx can be identified, with the element
gx ∈ X corresponding to the coset gSx . Thus
Thus G has at most s simple characters; and the result will follow if we can show
that every class function is a linear combination of characters.
It suffices for the latter to show that we can find a linear combination of char-
acters taking the value 1 on a given class ḡ, and vanishing on all other classes.
We can extend the formula in the Theorem above to define a map
from the space X(H, k) of class functions on the subgroup H ⊂ G to the space
X(G, k) of class functions on G, by
\[
f^G(\bar g) = \frac{\|G\|}{\|H\|\,\|\bar g\|}\sum_{\bar h\subset\bar g}\|\bar h\|\,f(\bar h).
\]
θ(g) = ω = e^{2πi/d}.
1, θ, θ², . . . , θ^{d−1}.
f = 1 + ω⁻¹θ + ω⁻²θ² + · · · + ω^{−(d−1)}θ^{d−1}.
Then
\[
f(h^i) = \begin{cases} d & \text{if } i = 1,\\ 0 & \text{otherwise,}\end{cases}
\]
ie f vanishes off the H-class {g}, but is not identically 0.
It follows that the induced function f^G(g) has the required property; it van-
ishes off ḡ, while
\[
f^G(\bar g) = \frac{\|G\|\,d}{\|H\|\,\|\bar g\|} \neq 0.
\]
Examples:
1. S3 has 3 classes: [1³], [21] and [3]. So it has 3 simple representations over C, as
of course we already knew: namely 1, ε and α.
2. D4 has 5 classes: {e}, {s²}, {s, s³}, {c, d} and {h, v}. So it has 5 simple
representations over C. We already know of 4 1-dimensional representa-
tions. In addition the matrices defining the natural 2-dimensional represen-
tation in R2 also define a 2-dimensional complex representation. (We shall
consider this process of complexification more carefully in Chapter ???.)
This representation must be simple, since the matrices do not commute, as
they would if it were the sum of 2 1-dimensional representations. Thus all
the representations of D4 are accounted for.
Proof I We have
\begin{align*}
I_G(\alpha, \beta^G) &= \frac{1}{\|G\|}\sum_{\bar g}\|\bar g\|\,\chi_\alpha(\bar g)\,\frac{\|G\|}{\|H\|\,\|\bar g\|}\sum_{\bar h\subset\bar g}\|\bar h\|\,\chi_\beta(\bar h)\\
&= \frac{1}{\|H\|}\sum_{\bar h}\|\bar h\|\,\chi_\alpha(\bar h)\,\chi_\beta(\bar h)\\
&= I_H(\alpha_H, \beta).
\end{align*}
J
This short proof does not explain why Frobenius’ Reciprocity Theorem holds.
For that we must take a brief excursion into category theory.
Let CG denote the category of G-spaces and G-maps. Then restriction and
induction define functors
S : C_G → C_H, I : C_H → C_G.
Now 2 functors
E : C1 → C2 , F : C2 → C1
f : X → X′
in C1 the diagram

  M(X, F Y )  ←−  M(X′, F Y )
      |                 |
  M(EX, Y )  ←−  M(EX′, Y )

is commutative, and similarly given any morphism
e : Y → Y′
in C2 the diagram

  M(X, F Y )  −→  M(X, F Y′)
      |                 |
  M(EX, Y )  −→  M(EX, Y′)

is commutative.
It’s not difficult to establish—but would take us too far out of our way—that
the induction and restriction functors are adjoint in this sense: if V is a G-space,
and U a H-space, then
IH (αH , β) = IG (α, β G ).
Chapter 11
(α × β)G = αβ,
G = {(g, g) : g ∈ G} ⊂ G × G.
Proof I
Proof I We have
\begin{align*}
I(\alpha_1\times\beta_1, \alpha_2\times\beta_2)
&= \frac{1}{|G||H|}\sum_{(g,h)\in G\times H}\chi_{\alpha_1\times\beta_1}(g,h)\,\chi_{\alpha_2\times\beta_2}(g,h)\\
&= \frac{1}{|G||H|}\sum_{(g,h)\in G\times H}\chi_{\alpha_1}(g)\chi_{\beta_1}(h)\,\chi_{\alpha_2}(g)\chi_{\beta_2}(h)\\
&= \frac{1}{|G|}\sum_{g\in G}\chi_{\alpha_1}(g)\chi_{\alpha_2}(g)\;\frac{1}{|H|}\sum_{h\in H}\chi_{\beta_1}(h)\chi_{\beta_2}(h)\\
&= I(\alpha_1, \alpha_2)\,I(\beta_1, \beta_2).
\end{align*}
J
Recall that a representation α over C is simple if and only if
I(α, α) = 1.
unless α1 = α2 and β1 = β2 .) J
It is useful to give a proof of the last part of the Proposition not using the fun-
damental result that the number of simple representations is equal to the number
of classes; for we can give an alternative proof of this result using product groups.
Proof I (of last part of Proposition). Suppose γ is a representation of G × H in
W over C.
Consider the restriction γH of γ to the subgroup H = e × H ⊂ G × H. Let V
be a simple part of WH :
WH = V ⊕ · · ·
Let
X = homH (V, W )
h = x₀⁻¹gx₀.
Now
Thus
τ = σ1∗ × σ1 + · · · + σs∗ × σs .
where e(i.j) ∈ N.
Thus
\[
I(\tau, \sigma_i \times \sigma_j) = \begin{cases} 1 & \text{if } \sigma_i^* = \sigma_j,\\ 0 & \text{otherwise.}\end{cases}
\]
In other words, σi × σj occurs in τ if and only if σi∗ = σj , and then occurs just
once. J
It follows from this result in particular that the number of simple representa-
tions is equal to I(τ, τ ).
Proof I We have
\begin{align*}
I(\tau, \tau) &= \frac{1}{|G|^2}\sum_{g,h}|\chi_\tau(g, h)|^2\\
&= \frac{1}{|G|^2}\sum_g\sum_{h\sim g}\left(\frac{|G|}{|\bar g|}\right)^2\\
&= \frac{1}{|G|^2}\sum_g |\bar g|\,\frac{|G|^2}{|\bar g|^2}\\
&= \sum_g \frac{1}{|\bar g|}.
\end{align*}
Since each class contributes |ḡ| terms to this sum, each equal to 1/|ḡ|, the sum is
equal to the number of classes. J
That proves the first part of the Theorem; the number of simple representations
is equal to I(τ, τ ), which in turn is equal to the number of classes.
τ = σ1∗ × σ1 + · · · + σs∗ × σs .
H × K → G : (h, k) 7→ hk
is an isomorphism.
A necessary and sufficient condition for this— supposing G finite—is that
hk = kh
2. |G| = |H||K|.
Now consider the symmetry group G of a cube. This has 48 elements; for
there are 8 vertices, and 6 symmetries leaving a given vertex fixed.
Of these 48 symmetries, half are proper and half improper. The proper sym-
metries form a subgroup P ⊂ G.
Let Z = {I, J}, where J denotes reflection in the centre of the cube. In fact
Z is the centre of G:
Θ : P → S4 .
ker Θ = {I}.
χπ ({J} × 4) = 0.
We have
I(π, 1 × 1) = (1/48)(1·1·6 + 6·2·1 + 3·2·1 + 3·4·1 + 6·2·1) = 1
(as we knew it would be, since the action is transitive). Similarly,
I(π, η × 1) = (1/48)(1·1·6 − 6·2·1 + 3·2·1 − 3·4·1 + 6·2·1) = 0,
I(π, 1 × ε) = (1/48)(1·1·6 − 6·2·1 + 3·2·1 + 3·4·1 − 6·2·1) = 0,
I(π, η × ε) = (1/48)(1·1·6 + 6·2·1 + 3·2·1 − 3·4·1 − 6·2·1) = 0.
It is clear at this point that the remaining simple parts of π must be of dimensions
2 and 3. Thus π contains either 1 × α or η × α. In fact
I(π, 1 × α) = (1/48)(1·6·2 + 3·2·2 + 3·4·2) = 1.
The remaining part drops out by subtraction; and we find that
π = 1 × 1 + 1 × α + η × β.
Chapter 12
Exterior Products
v1 ∧ · · · ∧ vr (v1 , . . . , vr ∈ V ),
where
v_{π1} ∧ · · · ∧ v_{πr} = ε(π) v₁ ∧ · · · ∧ vᵣ
for any permutation π ∈ Sr .
This implies in particular that any product containing a repeated element van-
ishes:
· · · ∧ v ∧ · · · ∧ v ∧ · · · = 0.
(We are assuming here that the characteristic of the scalar field k is not 2. In fact
we shall only be concerned with the cases k = R or C.)
The exterior product ∧r V could be defined rigorously as the quotient-space
∧r V = V ⊗r /X,
is a basis for ∧r V . (Note that there is one basis element corresponding to each
subset of {e1 , . . . , en } containing r elements.) It follows that if dim V = n then
∧r V = 0 if r > n;
while if r ≤ n then
\[
\dim\wedge^r V = \binom{n}{r}.
\]
Now suppose T : V → V is a linear map. Then we can define a linear map
∧r T : ∧r V → ∧r V
by
(∧r T )(v1 ∧ · · · ∧ vr ) = (T v1 ) ∧ · · · ∧ (T vr ).
(To see that this action is properly defined, it is sufficient to see that it sends the
subspace X ⊂ V ⊗n described above into itself; and that follows at once since
\[
(\wedge^r T)\bigl(v_{\pi 1}\wedge\cdots\wedge v_{\pi r} - \epsilon(\pi)\,v_1\wedge\cdots\wedge v_r\bigr)
= (Tv_{\pi 1})\wedge\cdots\wedge(Tv_{\pi r}) - \epsilon(\pi)(Tv_1)\wedge\cdots\wedge(Tv_r)
\]
is again one of the spanning elements of X.)
In the case r = n, ∧n V is 1-dimensional, with the basis element
e1 ∧ · · · ∧ en ;
and
∧n T = (det T )I.
This is in fact the “true” definition of the determinant.
Although we shall not make use of this, the spaces ∧r V can be combined to
form the exterior algebra ∧V of V
\[
\wedge V = \bigoplus_r \wedge^r V,
\]
with the “wedge multiplication”
∧ : ∧r V × ∧s V → ∧r+s V
defined by
(u1 ∧ · · · ∧ ur ) ∧ (v1 ∧ · · · ∧ vs ) = u1 ∧ · · · ∧ ur ∧ v1 ∧ · · · ∧ vs ,
extended to ∧V by linearity.
Observe that if a ∈ ∧r V, b ∈ ∧s V then
b ∧ a = (−1)rs a ∧ b.
In particular the elements of even order form a commutative subalgebra of ∧V .
ge_i = λ_i e_i (i = 1, . . . , n).
But now
g(e_{i₁} ∧ · · · ∧ e_{iᵣ}) = λ_{i₁}· · ·λ_{iᵣ} e_{i₁} ∧ · · · ∧ e_{iᵣ},
from which the result follows, since these products form a basis for ∧ʳV. J
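In other words χ_{∧ʳα}(g) is the r-th elementary symmetric function of the eigenvalues of α(g). For r = 2 this can be checked numerically: the sketch below (an illustration, not from the notes) builds the matrix of ∧²T on the basis e_i ∧ e_j (i < j) from 2 × 2 minors and compares its trace with e₂(λ₁, ..., λₙ).

```python
import numpy as np
from itertools import combinations

def wedge2(T):
    """Matrix of ^2 T on the basis e_i ^ e_j (i < j); its entries are 2x2 minors of T."""
    n = T.shape[0]
    pairs = list(combinations(range(n), 2))
    W = np.zeros((len(pairs), len(pairs)), dtype=complex)
    for a, (i, j) in enumerate(pairs):
        for b, (k, l) in enumerate(pairs):
            W[a, b] = T[i, k] * T[j, l] - T[i, l] * T[j, k]
    return W

T = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [1.0, 0.0, 1.0]])
lam = np.linalg.eigvals(T)

e2 = sum(lam[i] * lam[j] for i, j in combinations(range(3), 2))
assert np.isclose(np.trace(wedge2(T)), e2)    # tr(^2 T) = e_2(eigenvalues)
print(np.trace(wedge2(T)).real)
```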
The n polynomials
\[
a_1 = \sum_i x_i,\quad a_2 = \sum_{i_1<i_2} x_{i_1}x_{i_2},\quad \dots,\quad a_n = x_1\cdots x_n
\]
Proof I J
\[
f(t) = (1 - x_1 t)\cdots(1 - x_n t) = 1 - a_1 t + a_2 t^2 - \cdots + (-1)^n a_n t^n.
\]
Then
\[
\frac{f'(t)}{f(t)} = \frac{-x_1}{1 - x_1 t} + \cdots + \frac{-x_n}{1 - x_n t} = -s_1 - s_2 t - s_3 t^2 - \cdots.
\]
Thus
\[
-a_1 + 2a_2 t - 3a_3 t^2 + \cdots + (-1)^n n a_n t^{n-1}
= (1 - a_1 t + a_2 t^2 - \cdots + (-1)^n a_n t^n)(-s_1 - s_2 t - s_3 t^2 - \cdots).
\]
Equating coefficients,
\begin{align*}
a_1 &= s_1\\
2a_2 &= s_1 a_1 - s_2\\
3a_3 &= s_1 a_2 - s_2 a_1 + s_3\\
&\ \ \vdots\\
r a_r &= s_1 a_{r-1} - s_2 a_{r-2} + \cdots + (-1)^{r-1} s_r\\
&\ \ \vdots
\end{align*}
Hence
\begin{align*}
s_1 &= a_1\\
s_2 &= a_1^2 - 2a_2\\
s_3 &= a_1^3 - 3a_1 a_2 + 3a_3\\
&\ \ \vdots
\end{align*}
and conversely
\begin{align*}
a_1 &= s_1\\
2a_2 &= s_1^2 - s_2\\
6a_3 &= s_1^3 - 3s_1 s_2 + 2s_3\\
&\ \ \vdots
\end{align*}
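These identities are easy to verify symbolically. The following sympy sketch (not part of the notes) checks the first three of them in 4 variables.

```python
import sympy as sp
from itertools import combinations

n = 4
x = sp.symbols(f"x1:{n+1}")

def a(r):                      # elementary symmetric function a_r
    return sum(sp.Mul(*c) for c in combinations(x, r))

def s(r):                      # power sum s_r
    return sum(xi**r for xi in x)

assert sp.expand(a(1) - s(1)) == 0
assert sp.expand(2*a(2) - (s(1)*a(1) - s(2))) == 0
assert sp.expand(3*a(3) - (s(1)*a(2) - s(2)*a(1) + s(3))) == 0
print("Newton's identities hold for n =", n)
```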
12.5 Plethysm
There is another way of looking at the exterior product — as a particular case of
the plethysm operator on the representation-ring R(G).
Suppose V is a vector space over a field k of characteristic 0. (We shall only
be interested in the cases k = R or C.) Then the symmetric group Sn acts on the
tensor product V ⊗n by permutation of the factors:
V ⊗n = V Σ1 ⊕ · · · ⊕ V Σs ,
V P = V 1n , V N = V n .
P, N : V ⊗n → V ⊗n
defined by
\begin{align*}
P(v_1\otimes\cdots\otimes v_n) &= \frac{1}{n!}\sum_{\pi\in S_n}\pi(v_1\otimes\cdots\otimes v_n),\\
N(v_1\otimes\cdots\otimes v_n) &= \frac{1}{n!}\sum_{\pi\in S_n}\epsilon(\pi)\,\pi(v_1\otimes\cdots\otimes v_n).
\end{align*}
It follows that
X = ker N;
and so (since N is a projection)
∧ⁿV = V^{⊗n}/X ≅ im N = V^N;
that is, the nth exterior product of V can be identified with the ε-component of
V^{⊗n}.
Now suppose that V carries a representation α of some group G. Then G acts
on V ⊗n through the representation αn .
α n = α Σ1 + · · · + α Σs ,
Proof I We have
J
Since the actions of G and Sn on V ⊗n commute, they combine to define a
representation of the product group G × Sn on this space.
α Σ 1 × Σ1 + · · · + α Σ s × Σ s .
V = Vσ1 ⊕ · · · ⊕ Vσs
P = ρI
It follows that
\[
\rho = \begin{cases} 1 & \alpha = \sigma,\\ 0 & \alpha \neq \sigma.\end{cases}
\]
C
In other words,
\[
P = \begin{cases} I & \alpha = \sigma,\\ 0 & \alpha \neq \sigma.\end{cases}
\]
It follows that P acts as the identity on all simple G-subspaces carrying the repre-
sentation σ, and as 0 on all simple subspaces carrying a representation σ 0 6= σ. In
particular, P = I on Vσ and P = 0 on Vσ0 for all σ 0 6= σ. In other words, P is the
projection onto the component Vσ .
J
Chapter 13
Real Representations
V = C ⊗R U.
In practical terms,
V = U ⊕ iU,
ie each element v ∈ V is uniquely expressible in the form
v = u1 + iu2 (u1 , u2 ∈ U ).
On the other hand, suppose G acts on the vector space V over C. Then G also
acts on RV by the same rule
(g, v) 7→ gv.
CU = U ⊕ iU
Remarks:
1. Suppose β is described in matrix terms, by choosing a basis for U and giving
the matrices B(g) representing β(g) with respect to this basis. Then we can
take the same basis for CU , and the same matrices to represent Cβ(g). Thus
from the matrix point of view, β and Cβ appear the same. The essential
difference is that Cβ may split even if β is simple, ie we may be able to find
a complex matrix P such that
\[
P B(g) P^{-1} = \begin{pmatrix} C(g) & 0\\ 0 & D(g)\end{pmatrix}
\]
2. C(ββ 0 ) = (Cβ)(Cβ 0 )
3. C(β ∗ ) = (Cβ)∗
4. C1 = 1
5. dim Cβ = dim β
7. I(Cβ, Cβ 0 ) = I(β, β 0 )
8. R(α + α0 ) = Rα + Rα0
9. R(α∗ ) = (Rα)∗
12. RCβ = 2β
13. CRα = α + α∗
A = X + iY,
where X, Y are real. Then—as we saw above—the entry Ars is replaced in Rα(g)
by the 2 × 2 matrix
\[
\begin{pmatrix} X_{rs} & -Y_{rs}\\ Y_{rs} & X_{rs}\end{pmatrix}.
\]
Thus
\begin{align*}
\operatorname{tr} R\alpha(g) &= 2\sum_r X_{rr}
= 2\,\Re\Bigl(\sum_r A_{rr}\Bigr)
= 2\,\Re\operatorname{tr}\alpha(g)
= 2\,\Re\chi_\alpha(g)\\
&= \chi_\alpha(g) + \chi_\alpha(g^{-1}),
\end{align*}
since
\[
\overline{\chi(g)} = \chi(g^{-1}).
\]
13. This now follows on taking characters, since
Lemma 13.1 Given a representation α of G over C there exists at most one real
representation β of G over R such that
α = Cβ.
Proof I By Proposition 1,
Remarks:
1. In matrix terms α is real if we can find a complex matrix P such that the
matrices
P B(g)P −1
are real for all g ∈ G.
3. α = α∗
We have
(1) =⇒ (2) ⇐⇒ (3).
χα (g) = χβ (g).
Rα = RCβ = 2β.
Rα = β + β 0 .
Then by Proposition 1,
α + α∗ = CRα = Cβ + Cβ 0 .
But since α and α∗ are simple, this implies (by the unique factorisation theorem)
that
α = Cβ or α = Cβ 0 .
In either case α is real. J
This gives a (not very practical) way of distinguishing between the 3 classes:
R: α real ⇐⇒ χα real and Rα splits
R: Cβ = α is simple
C: Cβ = α + α∗ ,
I(β, β) = 2.
I(β, β) = 4.
Proof I Since
RCβ = 2β,
Cβ cannot split into more than 2 parts. Thus there are 3 possibilities:
1. Cβ = α is simple
Since
I(β, β) = I(Cβ, Cβ)
by Proposition 1, the values of I(β, β) in the 3 cases follow at once. Thus it only
remains to show that α is in the class specified in each case, and that α0 = α∗ in
case (3).
In case (1), α is real by definition.
In case (2),
2χα (g) = χ2α (g) = χβ (g)
is real for all g ∈ G. Hence χα (g) is real, and so α is either real or quaternionic.
If α were real, say α = Cβ 0 , we should have
Cβ = 2Cβ 0
V ⊗ V → C,
ie an element of
(V ⊗ V )∗ = V ∗ ⊗ V ∗ .
Thus the space of bilinear maps carries the representation (α∗ )2 of G. Hence the
invariant bilinear maps form a space of dimension
where
\[
Q(u, v) = \tfrac{1}{2}\bigl(F(u, v) + F(v, u)\bigr),\qquad
S(u, v) = \tfrac{1}{2}\bigl(F(u, v) - F(v, u)\bigr).
\]
But it is easy to see that if F is invariant then so are Q and S. Since F is the only
invariant bilinear form on V , it follows that either
F = Q or F = S,
Proof I Every bilinear form has a unique expression as the sum of its symmetric
and skew-symmetric parts. In other words, the space of bilinear forms is the direct
sum of the spaces of symmetric and of skew-symmetric forms; say
V ∗ ⊗ V ∗ = V Q ⊕ V S.
(α∗ )2 = αQ + αS ,
R: If α is real then
I(1, αQ ) = 1 and I(1, αS ) = 0.
C: If α is complex then
H: If α is quaternionic then
To compute I(1, αQ ), choose a basis e1 , ..., en for V ; and let the corresponding
coordinates be x1 , ..., xn . Then the n(n + 1)/2 quadratic forms
x2i (1 ≤ i ≤ n), 2xi xj (1 ≤ i < j ≤ n)
form a basis for V Q . Let gij denote the matrix defined by α(g). Thus if v =
(x1 , ..., xn ) ∈ V , then the coordinates of gv are
\[
(gv)_i = \sum_j g_{ij}x_j.
\]
Hence
\[
g(x_i^2) = \sum_{j,k} g_{ij}x_j\,g_{ik}x_k.
\]
In particular, the coefficient of x2i in this (which is all we need to know for the
trace) is gii2 . Similarly, the coefficient of 2xi xj in g(2xi xj ) is
gii gjj + gij gji .
We conclude that
\[
\chi_{\alpha^Q}(g) = \sum_i g_{ii}^2 + \sum_{i<j}(g_{ii}g_{jj} + g_{ij}g_{ji}).
\]
But
\[
\chi_\alpha(g) = \sum_i g_{ii},\qquad \chi_\alpha(g^2) = \sum_{i,j} g_{ij}g_{ji}.
\]
Thus
\[
\chi_{\alpha^Q}(g) = \tfrac{1}{2}\bigl(\chi_\alpha(g)^2 + \chi_\alpha(g^2)\bigr).
\]
Since
\[
I(1, \alpha^Q) = \frac{1}{|G|}\sum_{g\in G}\chi_{\alpha^Q}(g),
\]
it follows that
\[
2I(1, \alpha^Q) = \frac{1}{|G|}\sum_{g\in G}\bigl(\chi_\alpha(g)^2 + \chi_\alpha(g^2)\bigr).
\]
But
\[
\frac{1}{|G|}\sum_g \chi_\alpha(g)^2 = I(\alpha, \alpha^*).
\]
Thus
\[
2I(1, \alpha^Q) = I(\alpha, \alpha^*) + \frac{1}{|G|}\sum_g \chi_\alpha(g^2).
\]
The result follows, since I(α, α∗ ) = 1 in the real and quaternionic cases, and 0 in
the complex case. J
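The quantity (1/|G|) Σ_g χ_α(g²) appearing here (the Frobenius–Schur indicator) therefore distinguishes the three classes: +1 for real, 0 for complex, −1 for quaternionic. A small Python check (not from the notes) for two of our standard examples: the 2-dimensional representation of S3 (real type) and the representation k ↦ iᵏ of C4 (complex type).

```python
from itertools import permutations

# Frobenius-Schur indicator  (1/|G|) sum_g chi(g^2):
# +1 for a real representation, 0 for a complex one, -1 for a quaternionic one.

# --- The 2-dimensional representation alpha of S3, with character
#     chi_alpha(g) = (#fixed points of g) - 1.
S3 = list(permutations(range(3)))
def square(g):
    return tuple(g[g[i]] for i in range(3))
chi_alpha = lambda g: sum(g[i] == i for i in range(3)) - 1
print(sum(chi_alpha(square(g)) for g in S3) / 6)           # 1.0 -> real type

# --- The representation theta : k -> i^k of C4 = Z/4.
chi_theta = lambda k: 1j ** k
print(sum(chi_theta((2 * k) % 4) for k in range(4)) / 4)   # 0j -> complex type
```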
Appendix A
The basic ideas of linear algebra carry over with the quaternions H (or in-
deed any skew-field) in place of R or C.
2. (q1 + q2 )w = q1 w + q2 w,
4. 1w = w.
The notions of basis and dimension (together with linear independence and
spanning) carry over without change. Thus e1 , . . . , en are said to be linearly inde-
pendent if
q1 e1 + · · · + qn en = 0 =⇒ q1 = · · · = qn = 0.