
Degree Project in Technology

First cycle, 15 credits

Clifford Algebra
A Unified Language for Geometric Operations

HENRIK HANSSON, LEO GORDIN

Stockholm, Sweden 2022


Contents

1 Clifford Algebra 1
1.1 Constructing the Clifford algebra . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Multivectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.2.1 K-blades . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.2.2 Extending the inner product . . . . . . . . . . . . . . . . . . . . . 8
1.3 Interior products . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
1.4 Properties of the geometric product . . . . . . . . . . . . . . . . . . . . . 13
1.4.1 Grade and the geometric product . . . . . . . . . . . . . . . . . . 13
1.4.2 Clifford algebra and norm . . . . . . . . . . . . . . . . . . . . . . 14

2 Geometry 16
2.1 Clifford Algebra in two dimensions . . . . . . . . . . . . . . . . . . . . . 16
2.1.1 Complex numbers . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.1.2 Rotations in the plane . . . . . . . . . . . . . . . . . . . . . . . . 16
2.1.3 Matrix representation . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.2 Clifford Algebra in three dimensions . . . . . . . . . . . . . . . . . . . . . 17
2.2.1 Cross product . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.2.2 Trivectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.2.3 Quaternions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.2.4 Matrix representation . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.2.5 Maxwell’s equations . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.3 Projections and reflections . . . . . . . . . . . . . . . . . . . . . . . . . . 21
2.3.1 Projections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
2.3.2 Reflections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
2.3.3 Reflecting bivectors . . . . . . . . . . . . . . . . . . . . . . . . . . 22
2.4 Rotations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
2.4.1 One-sided rotor rotations . . . . . . . . . . . . . . . . . . . . . . . 24

Abstract

In this paper the Clifford Algebra is introduced and proposed as an alternative to Gibbs’
vector algebra as a unifying language for geometric operations on vectors. Firstly, the
algebra is constructed using a quotient of the tensor algebra and then its most important
properties are proved, including how it enables division between vectors and how it is
connected to the exterior algebra. Further, the Clifford algebra is shown to naturally
embody the complex numbers and quaternions, whereupon its strength in describing
rotations is highlighted. Moreover, the wedge product is shown as a way to generalize
the cross product and reveal the true nature of pseudovectors as bivectors. Lastly, we
show how replacing the cross product with the wedge product, within the Clifford algebra,
naturally leads to simplifying Maxwell’s equations to a single equation.

Sammanfattning

In this degree project, the Clifford algebra is introduced as an alternative to Gibbs' vector algebra, unifying several different geometric frameworks into one. First, the algebra is constructed using a quotient of the tensor algebra, after which its most important algebraic properties are proved, including how the product enables division between vectors and how the algebra is related to the exterior algebra. Further, it is shown how the Clifford algebra naturally contains both the complex numbers and the quaternions, whereupon its strength in describing rotations is highlighted. Later, it is shown how the wedge product can generalize the cross product and reveal the true nature of pseudovectors as bivectors inside the Clifford algebra. Lastly, it is shown how replacing the cross product with the wedge product naturally leads to simplifying Maxwell's equations to a single equation.
Acknowledgments

We wish to express our gratitude to our supervisor, Erik Duse, for his patient guidance,
useful critique, and recommendations for our thesis.
Chapter 1

Clifford Algebra

1.1 Constructing the Clifford algebra


In vector algebra, there are generally two ways to take the product of two vectors: the
scalar product and the cross product. The scalar product is commutative and distributive
but not invertible, while the cross product is not even associative, only works in three
dimensions (it can also be defined in seven dimensions, but not in arbitrary dimensions),
and is likewise not invertible. Now, suppose there exists a vector operation that is
associative and distributive but also invertible. If such a product existed, we could for
example solve equations by dividing vectors. What properties would we require of such
a product?
Take V to be an inner product space over a field K with some inner product ⟨·, ·⟩. We want to
have an operation on V for vectors u, v, w ∈ V such that

u(vw) = (uv)w, associativity


w(u + v) = wu + wv, left distributivity
(u + v)w = uw + vw, right distributivity

and one may also naturally require that vv = ⟨v, v⟩, so that vv = |v|², to encode a
geometric magnitude in the product.
Recall that the tensor algebra of V naturally encompasses associativity and distributivity.
With a quotient algebra, the tensor product can be enforced to satisfy v ⊗ v = ⟨v, v⟩.
Consider the following construction:
Definition 1.1. Suppose that V is a vector space over the field K with a quadratic form
Q. Then the Clifford algebra associated with V is

Cl(V, Q) := T (V )/I(Q)
where

T (V ) = K ⊕ V ⊕ (V ⊗ V ) ⊕ ... = ⊕_{i=0}^{∞} V^⊗i

is the tensor algebra associated with V , with V^⊗0 := K, and

I(Q) := (v ⊗ v − Q(v)1K ) = {w ⊗ (v ⊗ v − Q(v)1K ) ⊗ w′ : v ∈ V, w, w′ ∈ T (V )}

is the two-sided ideal generated by all elements of the form v ⊗ v − Q(v)1K for v ∈ V
where 1K is the multiplicative identity in K.

The name Clifford algebra is justified by the following theorem.

Theorem 1.2. The Clifford algebra Cl(V, Q) associated with the vector space V with
quadratic form Q is an algebra.

Proof. Since I(Q) = (v ⊗ v − Q(v)1K ) is an ideal, Cl(V, Q) = T (V )/I(Q) inherits the


structure of an algebra from T (V ). A proof of this fact is seen in page 430 of [1].

Moreover, the Clifford algebra is characterized by a universal property which motivates


one to talk about The Clifford algebra since it is unique up to isomorphism.

Theorem 1.3 (Universal property). Let A be an associative algebra with identity 1A and
f : V → A be any linear map such that f (v)² = Q(v)1A for all vectors v in a vector
space V with quadratic form Q. Moreover, suppose j : V → Cl(V, Q) is the canonical
inclusion map for the Clifford algebra such that j(v) = v. Then there exists a unique
algebra homomorphism ϕ : Cl(V, Q) → A such that f = ϕ ◦ j.

Proof. Since A is an associative algebra with identity, we can utilize the universal property
of the tensor algebra to find a unique homomorphism ψ : T (V ) → A such that f = ψ ◦ i
where i : V → T (V ) is the canonical inclusion mapping for the tensor algebra, i.e.
i(v) = v.
Define ϕ : T (V )/I(Q) → A by ϕ(x + I(Q)) = ψ(x) for all cosets x + I(Q) ∈ T (V )/I(Q)
where I(Q) is the ideal used in the construction of the Clifford algebra. Clearly, ϕ :
Cl(V, Q) → A. To show that this is well defined and does not depend on the choice of
representative element for the coset, consider x + I(Q) = y + I(Q), or in other words,
x + k = y for some k ∈ I(Q). Then, since

ψ(k) = ψ(p(i(v)2 − Q(v)1)q) = ψ(p)(ψ(i(v))2 − Q(v)1A )ψ(q) = 0

for some p, q ∈ T (V ), v ∈ V and we used the fact that ψ(i(v)) = f (v), we have that

ϕ(x + I(Q)) = ψ(x) = ψ(x) + ψ(k) = ψ(x + k) = ψ(y) = ϕ(y + I(Q)).

Moreover, ϕ is an algebra homomorphism since, for any cosets x + I(Q) and y + I(Q) we
have that

ϕ(x + I(Q))ϕ(y + I(Q)) = ψ(x)ψ(y) = ψ(xy) = ϕ(xy + I(Q))


= ϕ((x + I(Q))(y + I(Q))),

and

ϕ(x + I(Q)) + ϕ(y + I(Q)) = ψ(x) + ψ(y) = ψ(x + y) = ϕ(x + y + I(Q))
= ϕ((x + I(Q)) + (y + I(Q))).

Now, if η : T (V ) → Cl(V, Q), x ↦ x + I(Q) is the canonical quotient homomorphism,
then it is easy to see that ψ = ϕ ◦ η since x ↦ x + I(Q) ↦ ϕ(x + I(Q)) = ψ(x). Thus
ϕ is unique, and also f = ψ ◦ i = ϕ ◦ η ◦ i = ϕ ◦ j.

For two given elements w, w′ ∈ Cl(V, Q), we denote the induced quotient product between
them by
ww′
which we call the geometric product of w and w′ . By construction, for any element v ∈
Cl(V, Q) we have v 2 = Q(v). To get the desired property such that v 2 = ⟨v, v⟩, we simply
choose Q(v) = ⟨v, v⟩. This choice of Q will from here on be assumed unless otherwise
stated, and when V is specified to be an inner product space we shorten the notation
Cl(V, Q) to Cl(V ). This product is now by construction associative and distributive.
Theorem 1.4. Let u, v ∈ V where V is an inner product space, with inner product ⟨·, ·⟩,
over a field K of characteristic other than 2. Then uv + vu = 2⟨u, v⟩.

Proof. We have that

(u + v)(u + v) = uu + vv + uv + vu = ⟨u, u⟩ + ⟨v, v⟩ + uv + vu


but also,

(u + v)(u + v) = ⟨u + v, u + v⟩ = ⟨u, u⟩ + ⟨v, v⟩ + 2⟨u, v⟩


and the theorem follows by cancellation.

The theorem entails that orthogonal vectors anti-commute. In particular, if we fix an


orthonormal basis {ei } for V , it means that ei ej + ej ei = 0 if i ̸= j. Coupled with the
fact that ei ei = ⟨ei , ei ⟩ = 1 and the associativity and distributivity of the product, we
know how to calculate the product of any pair of vectors.
The following example shows the importance of working with orthogonal bases when
computing the geometric product, as otherwise simplifying the products of basis vectors
becomes quite cumbersome.
Example 1.5. Let V = R3 , v = −4 + e1 + 2e2 and u = −3e1 − e2 + 5e3 where {e1 , e2 , e3 }
is an orthogonal basis of V and ⟨·, ·⟩ is an inner product in V . Then we have by the
distributive property

vu = (−4 + e1 + 2e2 )(−3e1 − e2 + 5e3 )


= 12e1 + 4e2 − 20e3 − 3e1² − e1 e2 + 5e1 e3 − 6e2 e1 − 2e2² + 10e2 e3
= −3e1² − 2e2² + 12e1 + 4e2 − 20e3 + 5e1 e2 + 5e1 e3 + 10e2 e3

and if the basis is orthonormal this simplifies to

−5 + 12e1 + 4e2 − 20e3 + 5e1 e2 + 5e1 e3 + 10e2 e3

We note that above we have produced a sum of not only vectors and scalars, but also
objects of the form ei ej , which we will see later are not vectors but bivectors, elements
of the exterior algebra.
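To make these computation rules concrete, the following is a minimal Python sketch (not part of the original thesis) of the geometric product in an orthonormal Euclidean basis; the dictionary representation and the helper names simplify and gp are hypothetical choices made for illustration. It reproduces Example 1.5.

from itertools import product

# A multivector is represented as a dict mapping a sorted tuple of basis indices
# to a real coefficient, e.g. {(): -4.0, (1,): 1.0, (2,): 2.0} is -4 + e1 + 2e2.

def simplify(indices):
    """Sort basis indices by adjacent transpositions, flipping the sign for each
    swap (orthonormal vectors anticommute), then cancel e_i e_i = <e_i, e_i> = 1."""
    idx, sign, changed = list(indices), 1, True
    while changed:
        changed = False
        for k in range(len(idx) - 1):
            if idx[k] > idx[k + 1]:
                idx[k], idx[k + 1] = idx[k + 1], idx[k]
                sign, changed = -sign, True
    out = []
    for i in idx:
        if out and out[-1] == i:
            out.pop()          # e_i e_i = 1 contributes no basis factor
        else:
            out.append(i)
    return tuple(out), sign

def gp(a, b):
    """Geometric product of two multivectors, extended by bilinearity."""
    result = {}
    for (s, x), (t, y) in product(a.items(), b.items()):
        key, sign = simplify(s + t)
        result[key] = result.get(key, 0.0) + sign * x * y
    return {k: c for k, c in result.items() if c != 0.0}

# Example 1.5: v = -4 + e1 + 2e2 and u = -3e1 - e2 + 5e3
v = {(): -4.0, (1,): 1.0, (2,): 2.0}
u = {(1,): -3.0, (2,): -1.0, (3,): 5.0}
print(gp(v, u))   # -5 + 12e1 + 4e2 - 20e3 + 5e12 + 5e13 + 10e23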
Moreover, for any non-zero vector v ∈ V , if the inner product is Euclidean we have that

v (v/⟨v, v⟩) = v²/⟨v, v⟩ = v²/v² = 1

and, trivially,

(v/⟨v, v⟩) v = 1

which means that all non-zero vectors have an inverse in a Euclidean inner product space.
We conclude that the geometric product has the desired properties outlined above. Note
however that we don't strictly require an inner product for the inverse to exist; if Q is
positive definite, the inverse of v ∈ V is, analogously, v⁻¹ = v/Q(v).
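As a small usage example under the same assumptions as the sketch above (the hypothetical gp helper is assumed to be in scope), the inverse formula v⁻¹ = v/⟨v, v⟩ can be checked numerically:

# Assumes the gp helper from the sketch in Section 1.1 is in scope.
v = {(1,): 2.0, (2,): 1.0}                     # v = 2e1 + e2
vv = gp(v, v)[()]                              # v^2 = <v, v> = 5
v_inv = {k: c / vv for k, c in v.items()}      # v^{-1} = v / <v, v>
print(gp(v, v_inv), gp(v_inv, v))              # both give {(): 1.0}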

Corollary 1.6. Let V be an inner product space and Cl(V ) its Clifford algebra. Then
for any collection of n elements {vi }ni=1 ∈ V ,

v1 ...vk−1 vk ...vn = −v1 ...vk vk−1 ...vn

when vk and vk−1 are orthogonal.

Proof. This follows immediately from vk−1 vk = −vk vk−1 by left multiplication of both
sides with v1 ...vk−2 and right multiplication with vk+1 ...vn .

Next, we will see that the Clifford algebras, like the tensor algebra, are graded. This
means that we can make sense of adding scalars, vectors, and other types of objects
embedded in the Clifford algebra as seen in Example 1.5.
Theorem 1.7. Clifford algebras are graded. In particular, we have that the Clifford
algebra Cl(V, Q) = T (V )/I(Q) associated with the inner product space V over K and
quadratic form Q can be expressed as

Cl(V, Q) = ⊕_{i=0}^{∞} V^⊗i /Ii

where Ii consists of the grade i elements of I(Q). That is, Ii := V^⊗i ∩ I(Q).

Proof. This follows from the fact that T (V ) is a graded ring and I(Q) is a two-sided
homogeneous ideal in T (V ). We defer to chapter 1 of [3] for more details.

Next we will construct a basis of Clifford algebras.


Theorem 1.8. Let V be a vector space with dimension n over K and Cl(V, Q) an asso-
ciated Clifford algebra. If {e1 , ..., en } is a basis of V , then

{1K } ∪ {es1 · · · esk : s1 < · · · < sk , 1 ≤ k ≤ n}

is a basis for Cl(V, Q). If V is infinite-
dimensional with basis {ei } this changes to

{1K } ∪ {es1 · · · esk : s1 < · · · < sk , k ∈ N}

In particular, in the finite-dimensional case, Cl(V, Q) has dimension 2n .

Proof. In the finite-dimensional case, consider the induced basis {1K } ∪ {es1 ⊗ · · · ⊗ esk :
1 ≤ k ≤ n} of T (V ). It suffices to find and remove the linearly dependent cosets to
construct a basis for Cl(V, Q).
Consider the set H := {si : i, si ∈ N, 1 ≤ i ≤ k} ⊂ N. Then if we look at any two
permutations σ, τ of H, then by corollary 1.6 eσ(1) · · · eσ(k) = ±eτ (1) · · · eτ (k) and thus are
linearly dependent. Therefore we can, for each H with k elements, choose

eσ(1) · · · eσ(k)

with σ(i) < σ(j) for i < j and remove the other linearly dependent elements, which gives
the remaining basis

{1K } ∪ {es1 · · · esk : s1 < · · · < sk , 1 ≤ k ≤ n}

as desired. Since there are (n choose k) ways to choose k numbers from a set of n elements
without regard to order, the total number of basis elements is

1 + Σ_{k=1}^{n} (n choose k) = 2^n.

For infinite dimension the proof is completely analogous.
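As a short illustration of the basis just constructed (not part of the thesis), the following self-contained Python snippet enumerates the basis blades of Cl(V, Q) for dim V = 3 as increasing index tuples and confirms that there are 2³ = 8 of them; the helper name clifford_basis is a hypothetical choice.

from itertools import combinations

def clifford_basis(n):
    # One blade e_{s1} ... e_{sk} for every subset {s1 < ... < sk} of {1, ..., n};
    # the empty tuple stands for the scalar 1.
    return [s for k in range(n + 1) for s in combinations(range(1, n + 1), k)]

basis = clifford_basis(3)
print(basis)       # [(), (1,), (2,), (3,), (1, 2), (1, 3), (2, 3), (1, 2, 3)]
print(len(basis))  # 8 = 2**3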

Notice that for any given vector space V with dimension n, the dimension 2^n of Cl(V, Q)
is the same as that of the exterior algebra ∧V , meaning that they are isomorphic as
vector spaces. In addition, the structure of the above basis is exactly the same as that of
the standard basis of ∧V , which suggests an even deeper connection. This will allow us
to make sense of the higher graded elements of Clifford algebras.

Theorem 1.9. For the Clifford algebra Cl(V, Q) associated with the vector space V over
the field K and the quadratic form Q, Cl(V, Q) ≅ ∧V as vector spaces. In particular, if
K has characteristic other than two and V is an inner product space with an orthogonal
basis, then there exists a canonical isomorphism between the two algebras independent of
basis. That is, the linear map determined by

φ : Cl(V, Q) → ∧V : es1 · · · esk ↦ es1 ∧ · · · ∧ esk

for an orthogonal basis {ei } of V .

Proof. We begin by considering the case where K has characteristic other than two. In
this case, φ is quite clearly a bijective linear mapping between the two algebras. Now
take two choices of orthogonal bases of V , {ei } and {e′i }, as well as
w = Σ_{i=1}^{∞} wi esi = Σ_{i=1}^{∞} w′i e′si ∈ Cl(V, Q), where wi = w′i = 0 for i > 2^n if V has dimension n.
It remains to show that φ(e′s1 · · · e′sk ) = e′s1 ∧ · · · ∧ e′sk when φ is defined by

φ : es1 · · · esk ↦ es1 ∧ · · · ∧ esk .

However, this follows immediately from linearity and the fact that each e′i is a linear
combination of ei ∈ {ei }.

Now for the case when Char(K) = 2, first consider a vector space V with finite dimension
n. Then as shown in Theorem 1.8, Cl(V, Q) has dimension 2n just as ∧V does. Thus
they are isomorphic as vector spaces. If V has infinite dimension however, the proof of
isomorphism gets quite complex and is omitted here.

Due to the canonical isomorphism established by the previous theorem we can view the
elements of Cl(V, Q) as multivectors. Furthermore, we can show that we can define
the wedge product for multivectors in the Clifford algebra, and thus view Cl(V, Q) as
an extension of ∧V with the extra structure of the geometric product provided by the
quadratic form Q.
Note that when Q = 0, vu + uv = 0, in which case the geometric product fulfills all of the
wedge product axioms and thus Cl(V, Q) ≅ ∧V as algebras, not just vector spaces. Also,
since V always has an orthogonal basis when K has characteristic other than two, we can
always choose such a basis which gives ei ej +ej ei = 0, meaning that the geometric product
of basis multivectors in Cl(V, Q) fulfills the axioms of the wedge product. Because of this
we can consider the bases of Cl(V, Q) and ∧V the same up to algebra isomorphism instead
of just vector space isomorphism, meaning that they are completely equivalent.
Orthogonal decomposition also shows that vu = (v⊥ + v∥ )u = v⊥ u + v∥ u where v∥ ∥ u
and v⊥ ⊥ u. v⊥ u = 0 when v ∥ u and v∥ u = 0 when v ⊥ u, and thus we are tempted to
define vu = v ∧ u + ⟨v, u⟩ for v, u ∈ V , since we already consider the Clifford algebra an
extension of the exterior algebra.
The above is indeed possible, and will be done by decomposing the geometric product
into the symmetric and antisymmetric parts,

½(ww′ + w′ w)

and

½(ww′ − w′ w)
where we assume the field characteristic to be other than two. This decomposition is well
defined since the geometric product is associative and distributive.
Theorem 1.10. Let V be a vector space over K with characteristic other than two with
quadratic form Q. Then the antisymmetric part ½(vu − uv) of the geometric product in
Cl(V, Q) is equivalent to the wedge product v ∧ u for v, u ∈ V .

Proof. It suffices to show that ½(vu − uv) behaves like the wedge product of vectors.
That is, bilinearity and the identity v ∧ v = 0. Bilinearity is directly inherited from the
geometric product, and we clearly see that ½(vv − vv) = 0. By the uniqueness of the
wedge product up to isomorphism we are thus done.

By the above theorem we can define v ∧ u := ½(vu − uv) for v, u ∈ V .
Theorem 1.11. For the Clifford algebra Cl(V ) associated with the inner product space
V , the symmetric part ½(vu + uv) of the geometric product for v, u ∈ V is equal to the
inner product ⟨v, u⟩ if it is real.

Proof. The identity ∥v + u∥² = ∥v∥² + 2ℜ⟨v, u⟩ + ∥u∥² gives

½(vu + uv) = ½((v + u)² − v² − u²) = ½(∥v + u∥² − ∥v∥² − ∥u∥²)
= ½(∥v∥² + 2ℜ⟨v, u⟩ + ∥u∥² − ∥v∥² − ∥u∥²) = ℜ⟨v, u⟩

which is equal to ⟨v, u⟩ for any v, u if and only if the inner product is real.

Just as for the wedge product we can thus define ⟨v, u⟩ := ½(vu + uv) for v, u ∈ V .
Corollary 1.12. For a real inner product space V with inner product ⟨·, ·⟩, the geometric
product in the Clifford algebra Cl(V, Q) for v, u ∈ V can be expressed as vu = ⟨v, u⟩+v∧u.
Example 1.13. Let V = R3 , v = 2e1 + e2 and u = −e1 + 4e2 − 3e3 where {e1 , e2 , e3 } is
an orthonormal basis of V with inner product ⟨·, ·⟩. Then by corollary 1.12
vu = ⟨v, u⟩ + v ∧ u = 2 + 9e1 ∧ e2 − 6e1 ∧ e3 − 3e2 ∧ e3 = 2 + 9e1 e2 − 6e1 e3 − 3e2 e3
which is easily seen to agree with the result obtained by direct calculation.
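Under the same assumptions as the sketch in Section 1.1 (the hypothetical gp helper is assumed to be in scope), Example 1.13 can be checked directly; the scalar part is the inner product and the grade-two part is the wedge product.

v = {(1,): 2.0, (2,): 1.0}                     # v = 2e1 + e2
u = {(1,): -1.0, (2,): 4.0, (3,): -3.0}        # u = -e1 + 4e2 - 3e3
print(gp(v, u))
# {(): 2.0, (1, 2): 9.0, (1, 3): -6.0, (2, 3): -3.0}
# i.e. vu = <v, u> + v ^ u = 2 + 9 e12 - 6 e13 - 3 e23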

The direct relationship between the geometric product and the wedge product can be
generalized further, which we will do in the next section about multivectors.

1.2 Multivectors
In this section we will expand upon the concept of multivectors, providing some intuition
for their physical meaning, as well as the definitions and generalizations necessary for
connecting the Clifford and exterior algebras.
First we define the pure graded elements of the Clifford algebra.

1.2.1 K-blades
Moving forward we will always assume that any given vector space V has dimension
greater than any exterior powers taken.
Definition 1.14. Let V be a vector space. Any multivector w ∈ ∧k V , that is, any
multivector of the form w = v1 ∧ · · · ∧ vk for v1 , ..., vk ∈ V , is called a k-blade, or blade
for short.

Note that a 0-blade is a scalar and a 1-blade is a traditional vector, and they will be
referred to as such henceforth. A 2-blade and a 3-blade are also generally called a bivector
and a trivector, respectively. k-blades may in addition be called k-vectors.
The gradedness of the algebra allows us to define the graded projection.

Definition 1.15. Let V be a vector space and w ∈ ∧V . We define the graded projection
of w = Σ_{i=0}^{n} wi , where wi ∈ ∧i V , to grade k as

⟨w⟩k := wk

We see that the graded projection is clearly linear.


Theorem 1.16. Let V be a vector space. The map

φ : span{v1 , ..., vk } → span{v1 ∧ · · · ∧ vk }

is a 1 − 1 correspondence between lines through the origin in ∧k V and subspaces of V


with dimension k.

Proof. Consider another basis {v1′ , ..., vk′ } of the k-dimensional subspace

L := span{v1 , ..., vk } ⊆ V.
We can express

v1 ∧ · · · ∧ vk = det A(v1′ ∧ · · · ∧ vk′ )

where A is the change-of-basis matrix, and thus since the k-blades are parallel, their
spans are equal. This means that the basis is irrelevant. In the other direction, consider
v1′′ ∧ · · · ∧ vk′′ ∈ span{v1 ∧ · · · ∧ vk } for a basis {v1′′ , ..., vk′′ } of some k-dimensional subspace
L′ ⊆ V . Then

v1 ∧ · · · ∧ vk = det A′ (v1′′ ∧ · · · ∧ vk′′ )

where A′ is the change of basis matrix, and for all 1 ≤ j ≤ k we have

v1 ∧ · · · ∧ vk ∧ vj′′ = det A′ (v1′′ ∧ · · · ∧ vk′′ ∧ vj′′ ) = 0

since {v1′′ , ..., vk′′ , vj′′ } is linearly dependent, but this also means that {v1 , ..., vk , vj′′ } is lin-
early dependent. Thus vj′′ ∈ L ⇒ L′ = L.

The above theorem is especially illuminating about the nature of k-blades, that is, a k-
blade represents a k-volume. Notably, just as a vector can be seen to represent a line with
direction, a bivector represents an area with direction, a 3-vector represents a directed
volume and so on.

1.2.2 Extending the inner product


Since the norm of a 1-vector is considered to be its length, we want to construct an inner
product for multivectors in ∧V representing the area of a 2-vector, volume of a 3-vector
and so on. This inner product must clearly depend on the choice of inner product in V .
Definition 1.17. Let V be an inner product space with inner product ⟨·, ·⟩ and corre-
sponding norm ∥ · ∥ = √⟨·, ·⟩. Then the inner product is called Euclidean if

∥v∥2 > 0

for all nonzero v ∈ V .

Henceforth V ∗ will always denote the dual space of a vector space V .
Definition 1.18. Let V be a vector space over the field K. Then by the bilinear universal
property of the exterior algebra there exists a unique bilinear map

∧n V ∗ × ∧n V → K : (w∗ , w) → ⟨w∗ , w⟩

such that, for the multilinear map

φ(v1∗ , ..., vn∗ , v1 , ..., vn ) := det[⟨vi∗ , vj ⟩]_{1≤i,j≤n}

where vi∗ ∈ V ∗ and vi ∈ V for all 1 ≤ i ≤ n, φ(v1∗ , ..., vn∗ , v1 , ..., vn ) = ⟨v1∗ ∧ · · · ∧ vn∗ , v1 ∧
· · · ∧ vn ⟩. Let ⟨x, y⟩ := xy where x, y ∈ K. Now to include non-blade multivectors, define
⟨Σ_{i=0}^{n} wi∗ , Σ_{i=0}^{n} wi ⟩ := Σ_{i=0}^{n} ⟨wi∗ , wi ⟩

where wi∗ ∈ ∧i V ∗ and wi ∈ ∧i V .

Note that the extended duality reduces to the normal one for vectors. When V is an
inner product space, it remains to show that the above bilinear map indeed is an inner
product.
Theorem 1.19. Let V be an inner product space over the field K with inner product
⟨·, ·⟩. Then ⟨w, w′ ⟩ for w, w′ ∈ ∧V as defined in Definition 1.18 is an inner product in
∧V . Furthermore, if ⟨·, ·⟩ is Euclidean in V then it is also Euclidean in ∧V .

Proof. We first need to show that the 3 inner product axioms are fulfilled. Let a, b ∈ K
and w, w′ , w′′ ∈ ∧V . Then the axioms can be expressed as:

⟨w, w′ ⟩ = ⟨w′ , w⟩ (1.1)


⟨aw + bw′ , w′′ ⟩ = a⟨w, w′′ ⟩ + b⟨w′ , w′′ ⟩ (1.2)
⟨w, w′ ⟩ = 0 for all w if and only if w′ = 0 and vice versa (1.3)

(1) follows from the symmetry of the inner product in V since

⟨w, w′ ⟩ = ⟨Σ_{i=0}^{n} wi , Σ_{i=0}^{n} wi′ ⟩ = Σ_{i=0}^{n} ⟨wi , wi′ ⟩ = Σ_{i=1}^{n} det[⟨vij , v′ik ⟩]_{1≤j,k≤i} + w0 w0′
= Σ_{i=1}^{n} det[⟨v′ij , vik ⟩]_{1≤j,k≤i} + w0′ w0 = ⟨w′ , w⟩

where wi = vi1 ∧ · · · ∧ vii and wi′ = v′i1 ∧ · · · ∧ v′ii . (2) follows from the fact that the multivector inner product by
definition is bilinear for blades, and thus

⟨aw + bw′ , w′′ ⟩ = Σ_{i=0}^{n} ⟨awi + bwi′ , wi′′ ⟩ = Σ_{i=0}^{n} (a⟨wi , wi′′ ⟩ + b⟨wi′ , wi′′ ⟩)
= a Σ_{i=0}^{n} ⟨wi , wi′′ ⟩ + b Σ_{i=0}^{n} ⟨wi′ , wi′′ ⟩
= a⟨w, w′′ ⟩ + b⟨w′ , w′′ ⟩.

For (3), by linearity it suffices to consider the blade w = v1 ∧ · · · ∧ vn . Also, since φ from
Definition 1.18 is multilinear, we can set n = 1 without loss of generality, which reduces
the inner product to the normal one for vectors. Thus (3) is proven since it’s known to
hold for vectors.
Lastly, assume the inner product is Euclidean in V . We need to show that

∥w∥2 > 0 for w ̸= 0 and ⟨0, 0⟩ = 0

but this follows immediately from linearity in the same manner as (3).
With the above proof we now have a norm of multivectors, ∥w∥ := √⟨w, w⟩, with ⟨·, ·⟩ as
defined in Definition 1.18. Lastly we will show that this norm agrees with the existing
area formula for bivectors, justifying our definition.
Theorem 1.20. Let V be an Euclidean inner product space with the scalar product as
its inner product. Then for v, u ∈ V

∥v ∧ u∥ = ∥v∥∥u∥ sin θ
 
where θ is the angle between v and u, that is, θ = arccos(⟨u, v⟩/(∥v∥∥u∥)).

Proof. The identity ⟨v, u⟩ = ∥v∥∥u∥ cos θ gives

∥v ∧ u∥² = ⟨v ∧ u, v ∧ u⟩ = ⟨v, v⟩⟨u, u⟩ − ⟨v, u⟩⟨u, v⟩
= ∥v∥²∥u∥² − ⟨u, v⟩²
= ∥v∥²∥u∥² − ∥v∥²∥u∥² cos² θ
= ∥v∥²∥u∥²(1 − cos² θ) = ∥v∥²∥u∥² sin² θ

and thus ∥v ∧ u∥ = ∥v∥∥u∥ sin θ.
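A quick numerical check of the area formula, using the Gram-determinant expression for ∥v ∧ u∥² from the proof (a short numpy illustration, not part of the thesis):

import numpy as np

v = np.array([2.0, 1.0, 0.0])
u = np.array([-1.0, 4.0, -3.0])

# ||v ^ u||^2 = <v, v><u, u> - <v, u><u, v>, the 2x2 Gram determinant
gram = np.array([[v @ v, v @ u],
                 [u @ v, u @ u]])
lhs = np.sqrt(np.linalg.det(gram))

theta = np.arccos((u @ v) / (np.linalg.norm(v) * np.linalg.norm(u)))
rhs = np.linalg.norm(v) * np.linalg.norm(u) * np.sin(theta)
print(np.isclose(lhs, rhs))   # True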

1.3 Interior products


We showed that the geometric product between any vectors u, v fulfills

uv = ⟨u, v⟩ + u ∧ v.

This decomposition is the sum of a grade lowering term and grade increasing term.
However, this only works for vectors. Is there a way to generalize a decomposition of grade
lowering and grade increasing terms for the product between any two multivectors? We
know that the wedge product is always, by definition, grade increasing. Thus a natural
approach is to attempt to define a grade lowering operation by taking the adjoint of the
wedge product as follows:

Definition 1.21. Let V be a vector space. Then for w0∗ ∈ V ∗ , w ∈ ∧V we define left
interior multiplication w → w0∗ ⌟ w as the adjoint operation to the map V ∗ → ∧V ∗ : w∗ →
w0∗ ∧ w∗ and right interior multiplication w → w ⌞ w0∗ as the adjoint operation to the map
V ∗ → ∧V ∗ : w∗ → w∗ ∧ w0∗ . That is,

⟨w0∗ ∧ w∗ , w⟩ = ⟨w∗ , w0∗ ⌟ w⟩


⟨w∗ ∧ w0∗ , w⟩ = ⟨w∗ , w ⌞ w0∗ ⟩

Both interior multiplications are well-defined by the guaranteed existence and uniqueness of
adjoint operators.

To assist with calculation of interior products, we will determine their behavior on basis
multivectors.

Theorem 1.22. Let V be a vector space. Then for the dual bases {e∗i } and {ei },
e∗s ⌟ et = ε(s, t \ s)et\s if s ⊂ t, and e∗s ⌟ et = 0 if s ̸⊂ t;
et ⌞ e∗s = ε(t \ s, s)et\s if s ⊂ t, and et ⌞ e∗s = 0 if s ̸⊂ t,

where es = es1 ∧ · · · ∧ esk for s = {s1 , ..., sk } and we define ε(s, t) by es ∧ et = ε(s, t)es∪t
such that s ∪ t is ordered. That is, ε(s, t) := (−1)|{(si ,tj )∈s×t:si >tj }| .

Proof. Fix e∗s and et . We then have


⟨e∗s′ , e∗s ⌟ et ⟩ = ⟨e∗s ∧ e∗s′ , et ⟩ = ⟨ε(s, s′ )e∗s∪s′ , et ⟩ = ε(s, s′ ) if t = s ∪ s′ , and 0 otherwise.

We see that the above expression is 0 if s ̸⊂ t. In the other case note that the expression
is 0 unless s′ = t \ s, and thus ⟨e∗t\s , e∗s ⌟ et ⟩ = ε(s, t \ s). Since ⟨e∗r , er′ ⟩ = δrr′ , by linearity
we now have
e∗s ⌟ et = ε(s, t \ s)et\s if s ⊂ t, and 0 if s ̸⊂ t.

et ⌞ e∗s is calculated analogously.

We note for the notation used above that e∅ := 1.


By linearity the above theorem allows us to calculate any interior product. Before moving
on we will show an example that should make the method of calculation clear.

Example 1.23. Let V be a vector space with dimension 2 or higher. Then for the dual
bases {e∗i } and {ei }, linearity and theorem 1.22 gives
(3 + e∗1 − 2e∗2 ) ⌟ (4 + e2 + e1 ∧ e2 ) = 12 + 3e2 + 3e1 ∧ e2 + 0 + 0 + e2 − 2 + 2e1
= 10 + 2e1 + 4e2 + 3e1 ∧ e2
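The calculation in Example 1.23 can be automated directly from Theorem 1.22; the following Python sketch (illustrative only, with hypothetical helper names eps and left_interior) computes the sign ε(s, t \ s) and extends the left interior product by bilinearity.

def eps(s, t):
    """Sign eps(s, t) defined by e_s ^ e_t = eps(s, t) e_{s u t} for disjoint, sorted
    index tuples s and t: (-1)^(number of pairs (si, tj) with si > tj)."""
    swaps = sum(1 for si in s for tj in t if si > tj)
    return -1 if swaps % 2 else 1

def left_interior_basis(s, t):
    """e*_s _| e_t on basis blades (Theorem 1.22): eps(s, t\\s) e_{t\\s} if s is
    contained in t, and 0 otherwise.  Returns (coefficient, remaining blade)."""
    if not set(s) <= set(t):
        return 0, ()
    rest = tuple(i for i in t if i not in s)
    return eps(s, rest), rest

def left_interior(a, b):
    """Extend by bilinearity to general multivectors (dicts of blade -> coefficient)."""
    result = {}
    for s, x in a.items():
        for t, y in b.items():
            c, r = left_interior_basis(s, t)
            if c:
                result[r] = result.get(r, 0.0) + c * x * y
    return result

# Example 1.23: (3 + e1* - 2 e2*) _| (4 + e2 + e1 ^ e2)
a = {(): 3.0, (1,): 1.0, (2,): -2.0}
b = {(): 4.0, (2,): 1.0, (1, 2): 1.0}
print(left_interior(a, b))
# {(): 10.0, (1,): 2.0, (2,): 4.0, (1, 2): 3.0}, i.e. 10 + 2e1 + 4e2 + 3 e1 ^ e2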
We now have all the tools we need to confirm the grade lowering property we seek. Indeed,
theorem 1.22 immediately indicates this as shown below.
Theorem 1.24. Let V be a vector space over the field K. Then for the blades w ∈
∧i V, w∗ ∈ ∧j V ∗ with i ≥ j we have
w∗ ⌟ w, w ⌞ w∗ ∈ ∧i−j V

Proof. This follows immediately from 1.22 by bilinearity since for w = w0 et and w∗ = w0′ e∗s
where s ⊆ t,
w∗ ⌟ w = w0 w0′ e∗s ⌟ et = w0 w0′ ε(s, t \ s)et\s ∈ ∧i−j V
since |t \ s| = |t| − |s| = i − j when s ⊆ t, and w ⌞ w∗ is calculated the same way.

Note from the above theorem that when w, w′ ∈ ∧k V , w ⌟ w′ , w ⌞ w′ ∈ K. This
immediately raises the question of whether in this case w ⌟ w′ = w ⌞ w′ = ⟨w, w′ ⟩, and
this is indeed easily seen to be true.
Corollary 1.25. Let V be a vector space over the field K ⊆ R. Then for the i-blades
w ∈ ∧i V, w∗ ∈ ∧i V ∗ we have
w∗ ⌟ w = w ⌞ w∗ = ⟨w∗ , w⟩

Proof. With the same methodology as in the previous proof, we obtain the special case
w∗ ⌟ w = w ⌞ w∗ = w0 w0′ = ⟨w∗ , w⟩

Lastly we will present an anticommutation relation, which will be useful when applying
the interior products in Clifford Algebra.
Theorem 1.26. Let V be a vector space. Then for v ∈ V , w ∈ ∧V and v ∗ ∈ V ∗ we have that
⟨v ∗ , v⟩w = v ∗ ⌟ (v ∧ w) + v ∧ (v ∗ ⌟ w)

Proof. We will look at the case where v = e1 and v ∗ = e∗1 when {ei } and {e∗i } are dual
bases. In the induced basis for ∧V , we can write
w = w0 + e1 ∧ w0′ where w0 , w0′ ∈ ∧V0
where V = span{e1 } ⊕ V0 , that is, V0 is spanned by the basis of V but with e1 removed.
Then
v ∗ ⌟ (v ∧ w) = e∗1 ⌟ (e1 ∧ (w0 + e1 ∧ w0′ )) = e∗1 ⌟ (e1 ∧ w0 ) = w0
and
v ∧ (v ∗ ⌟ w) = e1 ∧ (e∗1 ⌟ (w0 + e1 ∧ w0′ )) = e1 ∧ w0
follows, where we used theorem 1.22 to eliminate terms. Thus we have
v ∗ ⌟ (v ∧ w) + v ∧ (v ∗ ⌟ w) = w = ⟨v ∗ , v⟩w
and the theorem follows from here by the bilinearity of the map (v, v ∗ ) → v ∗ ⌟ (v ∧ w) +
v ∧ (v ∗ ⌟ w) inherited from the wedge and interior products.

1.4 Properties of the geometric product

1.4.1 Grade and the geometric product


As alluded to previously, the geometric product can be expressed in terms of the wedge
product and interior product even when both arguments are not vectors. This however
requires an extension of their definitions within the Clifford algebra. Note that when
we below use operations from the exterior algebra on multivectors in the Clifford
algebra, we omit mention of the isomorphism for brevity.
Theorem 1.27. In the Clifford algebra Cl(V, Q) associated with the Euclidean vector
space V over K with quadratic form Q, we have

⟨ww′ ⟩n−m = w ⌟ w′ and ⟨w′ w⟩n−m = w′ ⌞ w

for the n-blade w and m-blade w′ with n > m.

Proof. By bilinearity we only need to consider how basis multivectors are transformed.
With the notation from the previous section, let w := es = es1 ∧ · · · ∧ esn = es1 · · · esn and
w′ := et = et1 ∧ · · · ∧ etm = et1 · · · etm for the orthonormal basis {ei }. Then, if s ⊆ t,

⟨ww′ ⟩n−m = ⟨es et ⟩n−m = ⟨ε(s, t)et\s ⟩n−m = ⟨es ⌟ et ⟩n−m = es ⌟ et = w ⌟ w′ .

where we used our definition of ε(s, t), theorem 1.24, and that e2i = 1. On the other hand,
if s ̸⊆ t, then we see that the number of basis vector factors in es et is not equal to n − m,
so the projection ⟨es et ⟩n−m = 0.

⟨w′ w⟩n−m

is shown to agree with the right interior product analogously, and thus we are done.

The above theorem does actually hold more generally for any vector space V , but proving
this is omitted from this thesis as it is very cumbersome and not very useful for our
purposes.
Theorem 1.28. In the Clifford algebra Cl(V, Q) associated with the vector space V over
K with quadratic form Q,

⟨ww′ ⟩n+m = w ∧ w′

for the n-blade w and m-blade w′ .

Proof. We need to show that the above operation fulfills the axioms of the wedge product.
This time however, since w, w′ are multivectors, we also need to show associativity. With
this definition however, associativity gets inherited from the geometric product, and due
to the linearity of graded projection, bilinearity is also inherited. Lastly it remains to
show that
⟨w2 ⟩2n = 0.
By bilinearity, it suffices to show that ⟨(es1 · · · esn )2 ⟩2n = 0 where each esi is orthogonal,
but this follows immediately from the fact that reordering elements in the product gives
(es1 · · · esn )2 ∈ K and thus ⟨(es1 · · · esn )2 ⟩2n = 0.

The above theorem can by linearity be extended to mixed grade multivectors
w = Σ_{i=0}^{n} wi , w′ = Σ_{i=0}^{m} wi′ with wi , wi′ ∈ ∧i V through

w ∧ w′ = Σ_{i=0}^{n} Σ_{j=0}^{m} wi ∧ wj′ = Σ_{i=0}^{n} Σ_{j=0}^{m} ⟨w⟩i ∧ ⟨w′ ⟩j

which is seen to inherit the properties of the wedge product, and reduces to

⟨ww′ ⟩n+m

for an n-blade w and m-blade w′ . The same is confirmed identically with theorem 1.27 for
the interior products.
Now we can see that we indeed are able to form a decomposition of grade lowering and
grade increasing terms for the geometric product with multivectors.

Theorem 1.29. In the Clifford algebra Cl(V ) associated with the inner product space V
with the orthogonal basis {ei }, the geometric products vw and wv for v ∈ V , w ∈ Cl(V )
can be expressed as

vw = v ⌟ w + v ∧ w
wv = w ⌞ v + w ∧ v

when w is a k-blade.

Proof. By bilinearity it's enough to consider the case where w = es1 ∧ · · · ∧ esk = es1 · · · esk .
By decomposing v = Σi vi ei we get

vw = Σi vi ei es1 · · · esk .

For term i, if ei = esj ∈ {es1 , ..., esk } then vi ei es1 · · · esk = ±vi es1 · · · esj−1 esj+1 · · · esk ∈
∧k−1 V and otherwise vi ei es1 · · · esk ∈ ∧k+1 V . Thus

vw = ⟨vw⟩k−1 + ⟨vw⟩k+1 = v ⌟ w + v ∧ w

by theorems 1.27 and 1.28. wv = w ⌞ v + w ∧ v is shown identically.

Similarly to the above theorem and corollary 1.12 we can develop a completely generalized
formula for the geometric product in terms of the inner, interior and wedge products. This
is not particularly useful however, and readers interested are referred to Proposition 3.1.9
in [4].

1.4.2 Clifford algebra and norm


Recall the identity
⟨v, u⟩ = ∥v∥∥u∥ cos θ.
where θ is the angle between v and u in the inner product space V . We also found the
complementary identity
∥v ∧ u∥ = ∥v∥∥u∥ sin θ

in the previous section, and together they immediately show that ∥v∥²∥u∥² = ∥v ∧ u∥² +
⟨v, u⟩², which may be recognized as one of the formulations of Lagrange's identity. This
may prompt one to consider what ∥vu∥² evaluates to.

∥vu∥² = ∥⟨v, u⟩ + v ∧ u∥² = ∥⟨v, u⟩∥² + 2ℜ⟨⟨v, u⟩, v ∧ u⟩ + ∥v ∧ u∥²
= ∥⟨v, u⟩∥² + ∥v ∧ u∥² = ∥v∥²∥u∥²

which means that ∥vu∥ = ∥v∥∥u∥. The follow-up question now becomes whether this can be
generalized to multivectors.

Theorem 1.30. For v ∈ V and w ∈ Cl(V ) where V is an inner product space, ∥vw∥ =
∥wv∥ = ∥v∥∥w∥.

Proof. Similarly to before we have

∥vw∥2 = ∥v ⌟ w∥2 + ∥v ∧ w∥2

since ⟨v ⌟w, v ∧w⟩ = 0. Furthermore, theorem 1.26 together with the definition of interior
products and the bilinearity of the inner product, give

∥v ∧ w∥² = ⟨v ∧ w, v ∧ w⟩ = ⟨w, v ⌟ (v ∧ w)⟩ = ⟨w, ⟨v, v⟩w − v ∧ (v ⌟ w)⟩
= ∥v∥²⟨w, w⟩ − ⟨v ⌟ w, v ⌟ w⟩ = ∥v∥²∥w∥² − ∥v ⌟ w∥².

Together both equations give ∥vw∥2 = ∥v∥2 ∥w∥2 as needed. ∥wv∥2 = ∥v∥2 ∥w∥2 is proved
identically with the right interior product.

Chapter 2

Geometry

2.1 Clifford Algebra in two dimensions

2.1.1 Complex numbers


Let us focus on the two dimensional plane of vectors, R2 . Along with its usual basis e1 , e2
we have that the Clifford algebra Cl(R2 ) has the basis {1, e1 , e2 , e12 }. An immediately
interesting property of this algebra is that

e12² = (e1 e2 )(e1 e2 ) = (e1 e2 )(−e2 e1 ) = −e1 (e2 e2 )e1 = −1

which lends itself well to renaming I := e12 . In fact, the even subalgebra R ⊕ R2 ∧ R2 is
closed under the geometric product, indeed, for two even multivectors of the subalgebra,
z = a + bI and z ′ = c + dI the geometric product is

zz ′ = (a + bI)(c + dI) = ac + bdI 2 + (ad + bc)I = (ac − bd) + (ad + bc)I

which is, as we expected, still in the even subalgebra and looks precisely like complex
multiplication. This is no coincidence.

Theorem 2.1. The even subalgebra of Cl(R2 ) is isomorphic to the complex numbers.

The proof is trivially seen with the isomorphism from the even subalgebra of Cl(R²) to C given by a + bI ↦ a + bi.

2.1.2 Rotations in the plane


Moreover, there is a geometric relationship between the even multivectors of the algebra
and its vectors. Consider v = ae1 + be2 , then

vI = (ae1 + be2 )e1 e2 = ae2 − be1

which we recognize as a 90-degree rotation of v in the counter-clockwise direction. Addi-


tionally,

Iv = e1 e2 (ae1 + be2 ) = −ae2 + be1

is a 90-degree rotation in the clockwise direction; multiplying vectors with I resembles
multiplying complex numbers with i. The even multivectors actually act as rotation
mappings for vectors in the algebra. To understand how this works, consider the fact
that, for u, v ∈ R2 ,

uv = u · v + u ∧ v = ∥u∥∥v∥(cos(θ) + sin(θ)I)
where θ is the angle between u and v. Take some vector w = ae1 + be2 and consider its
product with uv. The magnitudes of u and v will obviously just act as scalar multiplica-
tion on w, whereby we consider the case where u² = v² = 1:

w(uv) = (ae1 + be2 )(cos(θ) + sin(θ)I) = a(cos(θ)e1 + sin(θ)e2 ) + b(− sin(θ)e1 + cos(θ)e2 ).

We recognize the above as the matrix multiplication between w and the rotation matrix,

( cos(θ)  −sin(θ) ) ( a )   ( a cos(θ) − b sin(θ) )
( sin(θ)   cos(θ) ) ( b ) = ( a sin(θ) + b cos(θ) ).
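Assuming the hypothetical gp helper from the sketch in Section 1.1 is in scope, the rotor action can be compared numerically with the rotation matrix above (a small illustrative check, not part of the thesis):

import numpy as np

theta = np.pi / 6
w = {(1,): 3.0, (2,): 2.0}                           # w = 3e1 + 2e2
uv = {(): np.cos(theta), (1, 2): np.sin(theta)}      # uv = cos(theta) + sin(theta) I

wr = gp(w, uv)                                       # w(uv)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
print(np.allclose([wr[(1,)], wr[(2,)]], R @ [3.0, 2.0]))   # True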

2.1.3 Matrix representation


The Clifford Algebra can be represented by matrix algebras. For the standard Clifford
Algebra of the plane with the standard orthonormal basis, it can be verified by calculation
that the following matrix representation is valid:
 
w = a + v1 e1 + v2 e2 + bI ↦
( a + v2   v1 − b )
( v1 + b   a − v2 )

One may also note that setting the vector part to zero gives the matrix

a + bI ↦
( a   −b )
( b    a )

which the reader may recognize as the standard way of representing complex numbers
by matrices, and thus the connection to the rotation matrices is clear; for unit vectors
u, v ∈ V with the angle θ between them,

uv = cos(θ) + sin(θ)I ↦
( cos(θ)   −sin(θ) )
( sin(θ)    cos(θ) ).

2.2 Clifford Algebra in three dimensions


Taking V = R3 equipped with the scalar product, we can easily see that the induced
standard basis for Cl(V ) is given by

{1, e1 , e2 , e3 , e12 , e13 , e23 , e123 }

which means that Cl(V ) is eight dimensional.

2.2.1 Cross product
Now, take u, v ∈ V . Then their wedge product can be written as

u ∧ v = (u1 e1 + u2 e2 + u3 e3 ) ∧ (v1 e1 + v2 e2 + v3 e3 )
= (u2 v3 − u3 v2 )e23 + (u1 v3 − u3 v1 )e13 + (u1 v2 − u2 v1 )e12

which the reader may recognize as quite similar to the cross product. In fact, one can
easily check that u ∧ v = e123 (u × v) by

e123 (u × v) = e123 ((u2 v3 − u3 v2 )e1 − (u1 v3 − u3 v1 )e2 + (u1 v2 − u2 v1 )e3 )

= (u2 v3 − u3 v2 )e23 + (u1 v3 − u3 v1 )e13 + (u1 v2 − u2 v1 )e12 = u ∧ v

where we used that

e123 e1 = e1 e2 e3 e1 = e2 e3 = e23
e123 e2 = e1 e2 e3 e2 = −e1 e3 = −e13
e123 e3 = e1 e2 e3 e3 = e1 e2 = e12 .

The reason the cross product works in three dimensions is that the plane represented
by the span of u and v can be represented by a normal vector. We see that multiplying
a vector with e123 returns the bivector representing the plane that the vector is
normal to. This motivates the following naming:

E1 := e123 e1 = e23 ,
E2 := e123 e2 = −e13 ,
E3 := e123 e3 = e12 .

However, this does not translate well to arbitrary dimensions. For example, in two
dimensions, there is no way to even construct a non-zero vector perpendicular to two
other non-zero vectors. Moreover, vectors produced using the cross product are usually
called pseudovectors since they do not obey the tensor transformation rules. In light of
the above, pseudovectors are actually bivectors in disguise and the cross product obscures
this fact. On the other hand, the wedge product generalizes well to any dimension and
does not hide the fact that it generates bivectors. The wedge product will thus be
preferred from here on to the cross product. Later, we will show how abandoning the
cross product in favor of the wedge product in many physics equations greatly increases
clarity, intuition, and conciseness.
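As a small numerical check (numpy, illustrative only, not part of the thesis) that the bivector components of u ∧ v coincide with e123 (u × v) as derived above:

import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, -1.0, 2.0])

# components of u ^ v in the bivector basis (e23, e13, e12)
wedge = np.array([u[1] * v[2] - u[2] * v[1],
                  u[0] * v[2] - u[2] * v[0],
                  u[0] * v[1] - u[1] * v[0]])

c = np.cross(u, v)
# e123 (u x v) has e23-, e13- and e12-components (c1, -c2, c3)
print(np.allclose(wedge, [c[0], -c[1], c[2]]))   # True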

2.2.2 Trivectors
If one combines the three basis vectors into the trivector e123 one gets an oriented volume.
The trivectors of Cl(R³) are one dimensional and thus act much like scalars do. In
fact, trivectors are analogous to pseudoscalars: much like how pseudovectors are
disguised bivectors, pseudoscalars are disguised trivectors in R³.
For example, the triple product in vector algebra produces pseudoscalars. Take u, v, w ∈
R3 , then

u ∧ v ∧ w = u ∧ ((v2 w3 − v3 w2 )e23 + (v1 w3 − v3 w1 )e13 + (v1 w2 − v2 w1 )e12 )


= u1 (v2 w3 − v3 w2 )e123 − u2 (v1 w3 − v3 w1 )e123 + u3 (v1 w2 − v2 w1 )e123
= e123 (u1 (v2 w3 − v3 w2 ) − u2 (v1 w3 − v3 w1 ) + u3 (v1 w2 − v2 w1 ))
= e123 (u · ((v2 w3 − v3 w2 )e1 − (v1 w3 − v3 w1 )e2 + (v1 w2 − v2 w1 )e3 ))
= e123 (u · (v × w)).

2.2.3 Quaternions
Furthermore, one can see that the following relations hold:

E1² = e23² = e2 e3 e2 e3 = −e2 e2 e3 e3 = −1
E2² = (−e13 )² = e1 e3 e1 e3 = −e1 e1 e3 e3 = −1
E3² = e12² = e1 e2 e1 e2 = −e1 e1 e2 e2 = −1
E1 E2 E3 = −e23 e13 e12 = −e2 e3 e1 e3 e1 e2 = 1

and a renaming such that

I := −E1 , J := −E2 , K := −E3

gives that

I 2 = J 2 = K 2 = IJK = −1

which are precisely the defining characteristics for quaternion multiplication. Thus, due
to the geometric product’s associativity and distributivity, it is easy to see the following:

Theorem 2.2. The even subalgebra of Cl(R3 ) is isomorphic to the quaternions.

The proof follows by the isomorphism ϕ : a + bI + cJ + dK ↦ a + bi + cj + dk.


Quaternions are famous for representing rotations in three dimensional space well, and their
embedding in the Clifford algebra in three dimensions indicates that the Clifford algebra
handles rotations in three dimensions equally well. In fact, in section 2.4, we show how
the Clifford algebra generalizes the rotations of the quaternions to arbitrary dimensions.

2.2.4 Matrix representation


For the three dimensional Clifford Algebra, the algebra can also be represented by
two-by-two matrices, but over the complex numbers instead.

a + v1 e1 + v2 e2 + v3 e3 + b1 E1 + b2 E2 + b3 E3 + ce123

maps to

( a + v2 + i(c + b2 )     v1 − b3 + i(b1 + v3 ) )
( v1 + b3 + i(b1 − v3 )   a − v2 + i(c − b2 )  ),

which is also straightforward to verify by direct calculation, albeit quite tedious.

2.2.5 Maxwell’s equations


Maxwell’s equations are the foundation of classical electrodynamics, whose common dif-
ferential form is
∇ · E = ρ/ε0
∇·B=0
∇ × E = −∂B/∂t
∇ × B = µ0 J + (1/c²) ∂E/∂t
where E, B, J, ρ are the electric field, magnetic field, current density and charge density
respectively. µ0 , ε0 , c are constants, c being the speed of light. We aim to consolidate
these four equations into one.
A natural starting point is to combine the fields into one quantity, halving the number
of unknown functions. In the Clifford algebra Cl(R3 ) we define the multivector
F := E + cIB
where I := e123 is the unit pseudoscalar. The c factor is there so the units agree between
terms, and the factor I in cIB converts it into a bivector since B is a pseudovector, which
as suggested in the previous section is a natural choice. If we were to define the magnetic
field with the wedge product instead of the cross product, that is, as B := IB, we would
instead obtain the nicer definition F := E + cB of F .
To proceed, we realize that within the Clifford algebra, taking the gradient of a vector
makes sense since vectors can be multiplied. Thus we can obtain
∇F = ∇(E + cIB) = ∇ · E + ∇ ∧ E + (∂1 e1 + ∂2 e2 + ∂3 e3 )cIB
= ρ/ε0 + I∇ × E + c(∂1 e23 − ∂2 e13 + ∂3 e12 )B
= ρ/ε0 − I ∂B/∂t + c(I∇ · B − ∇ × B) = ρ/ε0 − I ∂B/∂t − cµ0 J − (1/c) ∂E/∂t

where Maxwell's equations and corollary 1.12 were used to obtain ∇F in terms of J and
time derivatives of E and B. The operator ∇∧ above is defined analogously to the curl
operator, but as a bivector in the expected manner. The natural next step is then to
calculate (1/c) ∂F/∂t, where the factor 1/c is there for unit correction. We get

(1/c) ∂F/∂t = (1/c) ∂E/∂t + I ∂B/∂t

which combined with the result of ∇F gives the equation

((1/c) ∂/∂t + ∇) F = µ0 c(cρ − J)

where we also used the fact that c² = 1/(ε0 µ0 ) to factor the right hand side. Thus we have
reduced Maxwell's equations to a single equation.
In natural units, where c := 1, we could simplify it even further by defining the multivector

J := µ0 (ρ − J)

and the derivative


∇ := ∂i ei
where we use Einstein summation for 0 ≤ i ≤ 3 and e0 := 1, ∂0 := ∂t . This gives the
most compact equation
∇F = J.

2.3 Projections and reflections

2.3.1 Projections
Suppose u, v ∈ V and v is non-zero. Then, thanks to the invertibility of the geometric
product, we can write

u = uvv −1 = (⟨u, v⟩ + u ∧ v)v −1 = ⟨u, v⟩v −1 + (u ∧ v)v −1

where we recognize, from elementary linear algebra,

⟨u, v⟩v⁻¹ = (⟨u, v⟩/∥v∥²) v = u∥
as the projection of u onto the subspace spanned by v. Thus, it naturally follows that
(u ∧ v)v −1 = u⊥ is the orthogonal part of u. Hence, using the graded projection we can
confirm that u⊥ = (u ∧ v)v −1 by

⟨(u ∧ v)v −1 , v⟩ = ⟨(u ∧ v)v −1 v⟩0 = ⟨(u ∧ v)⟩0 = 0

where we used the fact that the zero-grade projection of the product between two vectors
is the inner product between them.
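The orthogonal decomposition can be checked numerically; this short numpy sketch (illustrative, not from the thesis) computes u∥ = ⟨u, v⟩v⁻¹ and u⊥ = u − u∥ and verifies that u⊥ is orthogonal to v:

import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([2.0, 0.0, 1.0])                   # non-zero

u_par = (u @ v) / (v @ v) * v                   # <u, v> v^{-1}, the projection onto v
u_perp = u - u_par                              # corresponds to (u ^ v) v^{-1}
print(np.isclose(u_perp @ v, 0.0))              # True
print(np.allclose(u_par + u_perp, u))           # True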

2.3.2 Reflections
Before we consider reflections, we take note of the following definition:
Definition 2.3. Let V be an inner product space. Then, a linear transformation T :
V → V is an isometry if it preserves the inner product. That is, for all u, v ∈ V ,

⟨T (u), T (v)⟩ = ⟨u, v⟩.

A vector n represents a hyper plane by the orthogonal complement of its span, and the
reflection over the hyper plane is an isometric linear transformation with the hyper plane
as the set of fixed points. Using the geometric product, the reflection can be written
concisely.
Theorem 2.4. If u, n ∈ V and n is non-zero, then u reflected over the hyper plane
represented by n is given by Rn (u) = −nun−1 .

Proof. Since vectors in the hyper plane constitute the set of fixed points of the transfor-
mation, we have that u⊥ must remain unchanged after the transformation. Along with
isometry, it follows that

Rn (u) = u⊥ − u∥
which can be rewritten by

u⊥ − u∥ = (u ∧ n)n−1 − ⟨u, n⟩n−1


= −(⟨u, n⟩ − (u ∧ n))n−1
= −(⟨n, u⟩ + (n ∧ u))n−1
= −nun−1 ,
and one can easily verify that the reflection applied twice is the identity transformation:

−nu′ n⁻¹ = −n(−nun⁻¹ )n⁻¹ = n² (n²/∥n∥⁴) u = u, where u′ := −nun⁻¹ denotes the reflected vector.
The isometry of the transformation is confirmed by

⟨−nun−1 , −nvn−1 ⟩ = ⟨−nun−1 (−nvn−1 )⟩0 = ⟨(nu)(vn−1 )⟩0 = ⟨(vn−1 )(nu)⟩0


= ⟨vu⟩0 = ⟨u, v⟩.
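Assuming the hypothetical gp helper from the sketch in Section 1.1 is in scope, the reflection formula Rn(u) = −nun⁻¹ can be evaluated directly; reflecting over the e1 e2 plane (normal n along e3) flips only the e3 component. The helper name reflect is a hypothetical choice for illustration.

import numpy as np

def reflect(u, n):
    # -n u n^{-1} with the gp helper from the Section 1.1 sketch (assumed in scope)
    nv = {(i + 1,): c for i, c in enumerate(n)}
    uv = {(i + 1,): c for i, c in enumerate(u)}
    n_inv = {k: c / (n @ n) for k, c in nv.items()}       # n^{-1} = n / <n, n>
    r = gp(gp({k: -c for k, c in nv.items()}, uv), n_inv)
    return np.array([r.get((i,), 0.0) for i in (1, 2, 3)])

u = np.array([1.0, 2.0, 3.0])
n = np.array([0.0, 0.0, 2.0])      # hyper plane = the e1 e2 plane
print(reflect(u, n))               # [ 1.  2. -3.]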

2.3.3 Reflecting bivectors


Consider the bivector w = a ∧ b with a, b ∈ V . Intuitively, it would make sense to
define a reflection of w over the hyper plane represented by n by reflecting both a and b
independently. If we, for simplicity, assume n² = 1, then we would get

(−nan) ∧ (−nbn) = ½(nannbn − nbnnan) = ½(nabn − nban)
= n ½(ab − ba) n = n(a ∧ b)n
= nwn,

which aligns well with our reflection formula for vectors except that it has dropped its
sign [2]. This result is quite peculiar when considering pseudovectors in R³, since
pseudovectors are in fact bivectors, and it sheds light on why pseudovectors break
transformation rules: they do not act like vectors under reflection.

2.4 Rotations
As we saw in the special case of two dimensions equipped with the standard inner
product, a vector is rotated by θ degrees if multiplied by the even multivector
nm = cos(θ) + sin(θ)I, where n and m are unit vectors separated by θ degrees and the
ordering of the multiplication factors determines the direction of the rotation. This does
not completely generalize to arbitrary dimensions because if the dimension is greater
than two, then a multiplication between a vector and a bivector will yield a multivector
with a trivector part.
To generalize rotations, we make use of Cartan-Dieudonné’s theorem.
Theorem 2.5 (Cartan-Dieudonné). Let V be an inner product space. Then every iso-
metric transformation can be written as a composition of one or multiple hyper plane
reflections.

We omit the rather technical proof. The reader may consult Geometric Multivector
Analysis by Andreas Rosén for a comprehensive proof of the theorem.[4]
Now, rotations are clearly isometries, whereby we expect that a rotation can be written
as multiple hyper plane reflections. Suppose we use unit vectors n, m ∈ V as hyper plane
representations. Then, the composition applied to some vector v is

Rn ◦ Rm (v) = −n(−mvm−1 )n−1 = nmvmn.


Since the points orthogonal to m are invariant under Rm and points orthogonal to n are
invariant under Rn , it follows that if v is orthogonal to both m and n then Rn ◦ Rm (v) = v.
In other words, all points v orthogonal to the plane spanned by m and n are invariant under
the composite transformation.
Definition 2.6. If n, m ∈ V are unit vectors, then R = nm is called a rotor and R̂ = mn
is the reversion of R. The isometric transformation

Rn ◦ Rm : v ↦ Rv R̂
is called a rotation.

We note that the reversion R̂ fulfills RR̂ = nmmn = nn = 1 and R̂R = mnnm = mm = 1
whereby R−1 = R̂. It follows that the inverse of a rotation v 7→ Rv R̂ must thereby be
v 7→ R̂vR. This of course means that the inverse rotation rotates in the opposite direction.
To show that this is a good definition of a rotation, we consider the special case V = Rn
and check that the definition aligns with our expectations.
Theorem 2.7. If R = nm is a rotor constructed by unit vectors n, m ∈ Rn separated by
θ degrees, then rotation using the rotor R rotates any vector v in the plane spanned
by n and m by 2θ.

Proof. Firstly we note that since v is in the span of n and m, there exist coefficients a
and b such that v = an + bm. Thus,

Rv = nm(an + bm) = anmn + bnmm = anmn + bn = anmn + bmmn


= (an + bm)mn = v R̂,

which multiplied on the left by R on both sides gives

Rv R̂ = R2 v.

If the angle between v and Rv R̂ is ϕ then the isometry of the rotation transformation
gives that

⟨Rv R̂, v⟩ = ∥v∥∥v∥ cos(ϕ) = v² cos(ϕ)

which means cos(ϕ) = (1/v²)⟨Rv R̂, v⟩.
Since v and Rv R̂ are both vectors we can calculate their inner product using the scalar graded
projection:

cos(ϕ) = (1/v²)⟨Rv R̂, v⟩ = (1/v²)⟨R²v, v⟩ = (1/v²)⟨R²v²⟩0 = (v²/v²)⟨(nm)²⟩0
= ⟨(n · m + n ∧ m)²⟩0 = ⟨(n · m)² + 2(n · m)n ∧ m + (n ∧ m)²⟩0
= ⟨(n · m)²⟩0 + ⟨2(n · m)n ∧ m⟩0 + ⟨(n ∧ m)²⟩0
= cos²(θ) + 0 − sin²(θ) = cos(2θ).

2.4.1 One-sided rotor rotations


By construction, rotation transformations do not affect the component of the vector that
is orthogonal to the plane spanned by n and m. If we decompose a vector v = v∥ + v⊥
into components parallel and orthogonal to the plane of rotation, then we know from the
proof of theorem 2.7 that

Rv R̂ = R(v∥ + v⊥ )R̂ = R²v∥ + v⊥ .

In the two dimensional case, there can never be an orthogonal component. This explains
why we were able to rotate vectors in R² using one-sided rotors. Further, we note that
since the rotor is applied twice on the parallel component, clearly R²v will rotate twice
as much as Rv would. This is why we could just use a rotor encoded with an angle of θ
degrees directly in the two dimensional case.
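A numerical illustration of the two-sided rotor rotation, again assuming the hypothetical gp helper from the sketch in Section 1.1 is in scope: the component of v in the n, m plane is rotated by 2θ while the orthogonal component is untouched.

import numpy as np

theta = np.pi / 5
n = {(1,): 1.0}                                      # n = e1
m = {(1,): np.cos(theta), (2,): np.sin(theta)}       # unit vector at angle theta to n
R = gp(n, m)                                         # rotor R = nm
R_rev = gp(m, n)                                     # reversion R^ = mn

v = {(1,): 1.0, (3,): 2.0}                           # e1 lies in the plane, 2e3 is orthogonal
w = gp(gp(R, v), R_rev)                              # the rotation v -> R v R^

print(round(w.get((3,), 0.0), 12))                   # 2.0  (orthogonal part unchanged)
angle = np.arctan2(w.get((2,), 0.0), w.get((1,), 0.0))
print(np.isclose(abs(angle), 2 * theta))             # True (in-plane part rotated by 2*theta)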

Bibliography

[1] Nicolas Bourbaki. Algebra 1. Springer, 1989.


[2] Chris Doran et al. Geometric algebra for physicists. Cambridge University Press,
2003.
[3] Irena Peeva. Graded rings and modules. Springer, 2011.
[4] Andreas Rosén. Geometric multivector analysis. Springer, 2019.

TRITA –

www.kth.se
