RINGS
in Mathematics
by
Sharon Paesachov
December 2012
The thesis of Sharon Paesachov is approved:
Dedications
ACKNOWLEDGEMENTS
I would like to take this time to thank all those who have made this work possible.
First, to the Chair of my thesis committee, Jerry Rosen, for providing support and
guidance. I appreciate all the time spent explaining, and sometimes re-explaining.
Second, Mary Rosen, for all your proof reading and editing.
I also want to thank all the students who have struggled with me. You are all like
family to me. Without you I just don’t know how I would have made it. Thanks Mike
Hubbard, Alex Sherbetjian, Shant Mahserejian, Greg Cossel, Spencer Gerhardt, and
to a couple who have moved on, Jessie Benavides and Chad Griffith.
Finally to my family. I want to thank my mother for all her love and support. I
would have never gone back to school without your encouragement. To Kazia, thank
you for putting up with all the late, late nights and all the ups and downs, I’m sure
it wasn’t easy.
Table of Contents
Signature Page . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ii
Dedications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . iii
Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . iv
Abstract . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vii
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 Modules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.5 Wedderburn-Artin . . . . . . . . . . . . . . . . . . . . . . . . . . 25
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
Abstract
RINGS
by
Sharon Paesachov
ring theory. Namely, we classify all semisimple Artinian rings in terms of matrices
with entries from division rings. In the second chapter we start with some natural
the multiplication. Further in the chapter we show what conditions a ring must meet
in order to have (or be imbedded in) a division ring of fractions. The remainder of
the second chapter is devoted to more constructions of division rings. In the final
chapter we move our focus to simple rings. We give a construction of simple rings as
INTRODUCTION
The primary goal of this thesis is to give methods for constructing two classes
of noncommutative rings called division rings and simple rings. A division ring is
a ring in which every non-zero element has a multiplicative inverse. The key thing
Sir William Hamilton. Hamilton was searching for a way to represent vectors in
space in an analogous manner to how vectors in the plane are represented by complex
numbers. It turns out such a construction is impossible, but in his failed attempt,
division ring as well as a four dimensional vector space over the real numbers. The
next example of a division ring was found in 1903 by Hilbert. Hilbert started with
the field of Laurent series F ((x)), over the field F = R(t) (the rational function field
over the real numbers). He then “skewed” the multiplication via the automorphism
which maps t to 2t. That is, the indeterminate x no longer commutes with the
coefficients. Instead, we define xt = 2tx. The resulting ring is denoted F((x; σ)),
where σ(t) = 2t. Hilbert showed that F((x; σ)) is a division ring, called the division
ring of skew Laurent series. One interesting facet of Hilbert’s division ring is that it
In the 1920s and 30s ring theorists developed structure theories for large classes of
noncommutative rings. It was discovered that, in some sense, division rings provide
the underpinnings for many important classes of noncommutative rings and various
division ring constructions were discovered. Some of these constructions are not
easily accessible to most people studying advanced math. However, the skew Laurent
series construction only requires a field and an automorphism. We will review the skew
Laurent series construction and we will determine the center in all cases (i.e. where the
automorphism has finite and infinite periods). We then show how these constructions
can be generalized to the case where the coefficient ring is simple (a ring R is simple
if its only two-sided ideals are (0) and R). That is, if R is simple and σ is an
automorphism of R, then we prove that the skew Laurent series ring R((x; σ)) is simple.
We also show how certain classes of noncommutative domains (Ore domains) have
division rings of fractions and use this to give alternative methods for constructing
division rings. Finally, we construct simple rings by “skewing” the multiplication via
basic noncommutative ring theory so that the reader will get an idea how division
rings and simple rings figure into the basic structure theory of noncommutative rings.
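Hilbert's relation xt = 2tx from the discussion above is concrete enough to check by machine. The sketch below is my own illustration, not part of the thesis: it models elements of R[x; σ] with coefficients in Q[t], σ(t) = 2t, and verifies the defining relation using the skew rule (a x^i)(b x^j) = a σ^i(b) x^{i+j}.

```python
from fractions import Fraction

# An element of R[x; sigma] with R = Q[t] is stored as
# {power_of_x: {power_of_t: Fraction}}; sigma(t) = 2t, so sigma scales
# the coefficient of t^k by 2^k.
def sigma_pow(poly, i):
    """Apply sigma i times to a coefficient polynomial in t."""
    return {k: c * Fraction(2) ** (k * i) for k, c in poly.items()}

def poly_mul(p, q):
    """Ordinary multiplication in Q[t]."""
    out = {}
    for a, ca in p.items():
        for b, cb in q.items():
            out[a + b] = out.get(a + b, Fraction(0)) + ca * cb
    return {k: c for k, c in out.items() if c != 0}

def skew_mul(f, g):
    """Multiply using the skew rule (a x^i)(b x^j) = a sigma^i(b) x^(i+j)."""
    out = {}
    for i, a in f.items():
        for j, b in g.items():
            term = poly_mul(a, sigma_pow(b, i))
            acc = out.setdefault(i + j, {})
            for k, c in term.items():
                acc[k] = acc.get(k, Fraction(0)) + c
    return out

x = {1: {0: Fraction(1)}}       # the indeterminate x
t = {0: {1: Fraction(1)}}       # the coefficient t
two_t = {0: {1: Fraction(2)}}   # 2t

lhs = skew_mul(x, t)            # x * t
rhs = skew_mul(two_t, x)        # 2t * x
assert lhs == rhs               # Hilbert's relation x t = 2 t x
```

The representation of coefficients as dicts is an implementation choice of mine; the point is only that the twisted multiplication is a perfectly effective rule.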
Chapter 1: Classical Results in Noncommutative Ring Theory
Theory. We start with modules and these lead to our first example of a division
ring. Next, our discussion takes us into the exploration of the Jacobson radical of a
ring. We will try to find rings that have a “nice” Jacobson radical, and this will
allow us to define a semisimple ring and from there to go further and define a simple
ring. All of this serves two purposes: to prove the classical theorems of Wedderburn
and Artin which state that every simple Artinian ring is isomorphic to a matrix ring
over a division ring and that every semisimple Artinian ring is isomorphic to a finite
direct product of matrix rings over division rings. These theorems indicate the
Section 1: Modules
Definition 1.b: Alternatively, the additive abelian group M is said to
(1) m(a + b) = ma + mb
(2) (m + n)a = ma + na
(3) (ma)b = m(ab)
We remark that we are omitting the module axiom which states that m1 = m for all
m ∈ M where 1 is the unity of R. In this chapter, we are not assuming that our
rings contain a unity. In fact, the goal of several of our results is to conclude that certain
rings do have a unity.
Since it will be necessary to check that certain objects are submodules, we give the
submodule of M if
(1) N ≠ ∅
With respect to the definition of faithful, we can now say that M is faithful if
A(M ) = (0)
R/A(M )-module.
M(xr) = (M x)r = (0)r = (0) ⇒ xr ∈ A(M). Thus A(M) is a right ideal. To see that A(M) is
a left ideal, notice that M (rx) = (M r)x ⊂ M x = (0) ⇒ rx ∈ A(M ). Hence, A(M )
is a two-sided ideal of R.
r + A(M) = r′ + A(M) ⇒ r − r′ ∈ A(M)
= m(r1 + r2 )
= mr1 + mr2
The second and third axioms for modules follow just as easily.
(m + n)Ta = (m + n)a = ma + na = mTa + nTa for all m, n ∈ M. From this point
on, E(M ) will denote the set of all endomorphisms of the additive group of M .
E(M ) is a ring using pointwise addition for the sum and composition for the
product.
Φ(ab) = Φ(a)Φ(b) since mTab = m(ab) = (ma)b = (mTa )Tb = m(Ta Tb ). Thus Φ is a
Thus we get that A(M ) = ker Φ. Since the image of R under Φ is a subring of
E(M ), then the First Isomorphism Theorem for rings gives our desired result.
Remark: C(M ) is clearly a subring of E(M ). Note that if φ ∈ C(M ), then for all
R-module endomorphisms of M .
need only show φ^{−1} ∈ E(M) because then φTa = Ta φ will imply Ta φ^{−1} = φ^{−1} Ta
submodule of M and ker φ ≠ M (since φ ≠ 0); thus ker φ = (0) and φ is one-to-one.
This shows φ^{−1} exists in E(M). Thus, every non-zero element of the commuting
ring is invertible.
As we will never mention any other radical in this thesis, we will refer to the
Jacobson radical as simply the radical. The radical helps us identify “bad” elements
in a ring and when we mod out by it, the ring we obtain is more amenable to
analysis.
Definition 9: For any ring R, the right Jacobson radical of R, denoted J(R), is
the set of all elements of R which annihilate all the irreducible right R-modules. If
R has no irreducible right R-modules, we set J(R) = R.
Note that the left Jacobson radical is defined similarly. These two turn out to be
right ideal ρ of R. Moreover, ρ is regular. Conversely, for every such maximal right
Since S ≠ M, we must have that S = (0). Equivalently, if m ≠ 0 is in M, then
of the irreducible module R/ρ. Hence ρ0 /ρ = R/ρ which implies (by the
(2) If R has a unity (even a left unity), then all its right ideals are regular. To see
all x ∈ R (that is, in the definition of regular, simply take a = e).
right ideal ρ. In this case R must have a proper regular right ideal. As we stated, if
R has no irreducible modules, then J(R) = R. Since our main goal is to prove
J(R/J(R)) = (0), and this will trivially be the case if R = J(R), without loss of
(ρ : R) = {x ∈ R : Rx ⊂ ρ}.
We now prove a few theorems involving maximal regular right ideals. It turns out
Theorem 13: J(R) = ∩(ρ : R) where ρ runs over all the maximal regular right
Proof: Let ρ be a maximal regular right ideal of R and let M = R/ρ. If x ∈ A(M ),
which implies A(M ) ⊂ (ρ : R). The other inclusion follows by going the opposite
Next we simplify the characterization of J(R) in terms of the maximal regular right
Proof: ρ regular implies there exists a ∈ R such that x − ax ∈ ρ for all x ∈ R. Let
regular. To see that ρ0 is maximal, suppose ρ0 ⊊ J for some right ideal J. Then J
right-quasi-regular.
We remark that if an element a is both left and right-quasi-regular, then the left
Theorem 17: J(R) = ∩ρ where ρ runs over all the maximal regular right ideals of
R. Furthermore, J(R) is a right-quasi-regular ideal of R and contains all the
right-quasi-regular right ideals of R.
Proof: “⊂” By Theorem 13, J(R) = ∩(ρ : R), and since (ρ : R) ⊂ ρ, we
conclude J(R) ⊂ ∩ρ where ρ runs over all the maximal regular right ideals of R.
“⊃” Let T = ∩ρ and let x ∈ T. Consider S = {xy + y : y ∈ R}. We claim that
S = R.
exists a″ ∈ J(R) such that a′ + a″ + a′a″ = 0. Then a′ has a as a left-quasi-inverse
and a″ as a right-quasi-inverse. By our earlier remark we get that a = a″. We have
of R. Using the left analog of the above result, the right radical of R (as a
left-quasi-regular ideal) is contained in the left radical. Similarly the left radical of
R is contained in the right radical. We have shown now what we meant earlier in
the section when we stated that the left and right radicals are the same.
Another remark: In a commutative ring with unity, the radical is the intersection of
a^n = 0
Definition 19: A right (left, two-sided) ideal is nil if each element is nilpotent.
such that a1 · · · am = 0 for all a1, ..., am ∈ ρ (that is, ρ is nilpotent if ρ^m = (0) for
some m ∈ N).
Note that every nilpotent ideal is nil. (It is possible to construct a nil ideal that is
not nilpotent.)
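A concrete instance of a nilpotent (hence nil) ideal — a standard example of my own choosing, not one from the text — lives in the ring of 2×2 upper triangular rational matrices, where the strictly upper triangular matrices form an ideal ρ with ρ^2 = (0):

```python
# In the ring of 2x2 upper triangular rational matrices, the strictly
# upper triangular matrices form an ideal rho with rho^2 = (0): every
# product of two of its elements vanishes, so rho is nilpotent, hence nil.
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

ZERO = [[0, 0], [0, 0]]
for b1 in (-2, 0, 3):
    for b2 in (-1, 1, 5):
        a1 = [[0, b1], [0, 0]]
        a2 = [[0, b2], [0, 0]]
        assert mat_mul(a1, a2) == ZERO   # rho · rho = (0)
```

Here ρ^2 = (0) outright, so nilpotency needs no large exponent; nil ideals that are not nilpotent require infinite-dimensional constructions.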
ρ ⊂ J(R).
The last result in this section answers the question: if we factor out the radical of a
Proof: Let R = R/J(R) and let ρ be a maximal regular right ideal of R. Since
Definition 23: A ring is Artinian if any non-empty set of right ideals has a
minimal element.
Definition 24: A ring satisfies the descending chain condition (DCC) on right
then there is some n ∈ N such that ρn = ρk for all k ≥ n (that is, the descending
right ideals of R. If R is Artinian, then the set {ρi }i∈N of right ideals has a minimal
element, say ρm . Since ρm ⊃ ρm+1 ⊃ · · · and ρm is minimal, we must have that
ρm = ρk for all k ≥ m.
Conversely, suppose that R satisfies the DCC on right ideals. Let S be a nonempty
Note: Given a nonzero right ideal I of an Artinian ring, there is a minimal right
ideal contained in I.
(1) Let A be a ring which is an algebra over a field F (that is, the elements of F
commute with the elements of A). Then A is a vector space over F and the right
ideals of A are also subspaces (with the scalar action on the right). If A is finite
(2) We note that the polynomial ring K[t], with coefficients in a field K, is not
Artinian: (t) ⊃ (t^2) ⊃ (t^3) ⊃ · · ·
is a descending chain of ideals which never terminates. However, we claim that the
quotient ring K[t]/(t^n) is Artinian for all positive integers n. To see this, let I/(t^n)
be an arbitrary ideal of K[t]/(t^n). Since K[t] is a PID, I = (f(t)) for some monic
polynomial f(t). Since (t^n) ⊂ I, it follows that f(t) | t^n, and K[t] being a UFD gives
f(t) = t^j for some j ≤ n. Hence I/(t^n) = (t^j)/(t^n), from which it follows that
K[t]/(t^n) is Artinian.
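The claim can even be confirmed by brute force in the smallest interesting case, K = GF(2) and n = 3; the bit-level encoding below is my own, not the thesis's notation:

```python
# Elements of K[t]/(t^3) for K = GF(2): encode a0 + a1 t + a2 t^2 as the
# integer a0 | a1<<1 | a2<<2.  Addition is XOR; multiplication is
# carry-less polynomial multiplication truncated below t^3.
def mul(a, b):
    res = 0
    for i in range(3):
        if (b >> i) & 1:
            res ^= a << i
    return res & 0b111           # reduce mod t^3

ideals = []
for mask in range(256):          # every subset of the 8 ring elements
    S = {e for e in range(8) if (mask >> e) & 1}
    if 0 not in S:
        continue
    if all(a ^ b in S for a in S for b in S) and \
       all(mul(a, r) in S for a in S for r in range(8)):
        ideals.append(S)

# Exactly (0), (t^2), (t), and the whole ring survive:
assert sorted(len(S) for S in ideals) == [1, 2, 4, 8]
```

With only four ideals, every descending chain obviously terminates, matching the claim that K[t]/(t^n) is Artinian.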
(3) We will show in Chapter 3 that if D is a division ring, then the matrix ring Mn(D) is
Artinian.
Proof: Let J = J(R) and consider the descending chain of right ideals
this forces x = (0). Since J^n ≠ (0), it is a nonzero ideal of the Artinian ring R and
(J(R) is right-quasi-regular, so J(R) is a right-quasi-regular ideal of R, so
J(R) ⊂ J(R). Hence ρJ^n = (0) and we have shown this implies ρ = (0), a
contradiction.
Corollary 27: If R is Artinian, then any (right, left, or two-sided) nil ideal of R is
nilpotent.
Proof: In Lemma 21 we showed that every nil right ideal of R is contained in
J(R), and if R is Artinian, J(R) is nilpotent (Theorem 26). So, in this case, any nil
ideal is a nilpotent ideal.
nilpotent two-sided ideal. To see this let ρ ≠ (0) be a nilpotent right ideal of R. If
some m ∈ N, then
nilpotent.
Lemma 29: Let R be a ring having no nonzero nilpotent ideals. Suppose ρ ≠ (0) is
Proof: Let ρ ≠ (0) be a minimal right ideal of R. Since R has no nonzero nilpotent
ideals, then by the preceding remark, ρ^2 ≠ (0) and hence there is some x ∈ ρ such
xe^2 = xe and so x(e^2 − e) = 0. Let ρ0 = {a ∈ ρ : xa = 0}. Clearly ρ0 is a right ideal
ρ = eR
Lemma 30: Let R be a ring and suppose that for some a ∈ R, a^2 − a is nilpotent.
Then, either a is nilpotent or, for some q(x) ∈ Z[x], e = aq(a) is a nonzero
idempotent.
a^k = a^{k+1} p(a)
= a · a^k · p(a)
= a · a^{k+1} p(a) · p(a)
= a^{k+2} p(a)^2
continuing we get
a^k = a^{2k} p(a)^k
e = a^k p(a)^k = aq(a) for some q(x) ∈ Z[x].
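Lemma 30 can be watched in action on a concrete matrix — a hypothetical example of mine, not one from the text. Take a = [[1,1],[0,1]]: then a^2 − a = [[0,1],[0,0]] squares to zero, so k = 2, and expanding a^2(a − 1)^2 = 0 gives a^2 = a^3(2 − a), i.e. p(x) = 2 − x.

```python
# Lemma 30 in action.  For a = [[1,1],[0,1]] we have a^2 - a = [[0,1],[0,0]],
# which squares to zero, so k = 2.  Expanding a^2 (a - 1)^2 = 0 gives
# a^2 = a^3 (2 - a), i.e. a^k = a^(k+1) p(a) with p(x) = 2 - x, and the
# lemma's idempotent is e = a^k p(a)^k = a q(a).
def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

a = [[1, 1], [0, 1]]
a2 = mul(a, a)

n = [[a2[i][j] - a[i][j] for j in range(2)] for i in range(2)]
assert mul(n, n) == [[0, 0], [0, 0]]       # a^2 - a is nilpotent

p_of_a = [[2 * (i == j) - a[i][j] for j in range(2)] for i in range(2)]
e = mul(mul(a2, p_of_a), p_of_a)           # e = a^2 p(a)^2

assert mul(e, e) == e and e != [[0, 0], [0, 0]]   # nonzero idempotent
```

In this particular example the idempotent produced is the identity matrix; a is neither nilpotent nor idempotent, yet aq(a) is.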
Theorem 31: If R is Artinian and ρ ≠ (0) is a right ideal that is not nilpotent,
Proof: By Theorem 26, R Artinian implies J(R) is a nilpotent ideal. Since ρ is not
Now let R̄ = R/J(R). Since J(R̄) = (0), there are no nonzero nilpotent ideals
in R̄. Let ρ̄ = {a + J(R) : a ∈ ρ}; that is, ρ̄ is the image of ρ in R̄. Also ρ̄ ≠ (0) and
contains an idempotent element ē such that ρ̄0 = ēR̄. Let a ∈ ρ map onto ē; that
0 = a^k would give 0 = ā^k = ē^k = ē,
a contradiction. By Lemma 30, there must exist a polynomial q(x) such that
We will now consider the ring eRe where e is an idempotent in R. This ring is
closely related to R.
J(eRe) = eJ(R)e.
Case 1: If M e 6= (0), then there exists m ∈ M such that me 6= 0 ∈ M e. Now
Thus, in both cases, we get that J(eRe) annihilates all irreducible R-modules and
“⊃” Let a ∈ eJ(R)e ⊂ J(R) (since J(R) is a two-sided ideal). Thus a has a right
and left quasi-inverse a′. So a + a′ + aa′ = 0. Multiplying both sides on the left and
right by e gives
0 = a + ea′e + eaa′e
= a + ea′e + (eae)(ea′e)
= a + ea′e + a(ea′e)
This gives us that ea′e is a right quasi-inverse of a. Since quasi-inverses are unique,
Theorem 33: Let R be a ring having no nonzero nilpotent ideals and suppose
division ring.
Proof: (⇒) Suppose eR is a minimal right ideal of R. If eae ≠ 0 ∈ eRe, then
(0) ≠ eaeR ⊂ eR which implies eaeR = eR by the minimality of eR. Thus there is
show eae has a multiplicative inverse that works on both sides. Now
there exists x ∈ R such that (exe)(eae) = e. We can conclude eye = exe (because in
any ring with unity, if an element has a right multiplicative inverse and a left
multiplicative inverse, it is easy to show these must be equal) and eae is invertible.
nonzero right ideal of R. Then ρ0e ≠ (0), otherwise ρ0^2 ⊂ ρ0ρ = ρ0eR = (0), but this
Interchanging left with right in the above theorem, we immediately have:
Corollary 34: Let R be a ring having no nonzero nilpotent ideals and suppose
Example: We will show in Chapter 3 Section 3 (Theorem 13) that the only
two-sided ideals of Mn (F ) (where F is a field) are (0) and Mn (F ) (and we will say
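That fact can be checked exhaustively in the smallest case, n = 2 and F = GF(2); the brute-force sketch below is my own illustration, not an argument from the thesis:

```python
from itertools import product

# Brute-force check that M2(F) is simple for F = GF(2): the two-sided
# ideal generated by ANY nonzero matrix is all 16 matrices.  Matrices are
# flattened 4-tuples (a00, a01, a10, a11); this encoding is mine.
ring = [tuple(m) for m in product((0, 1), repeat=4)]
zero = (0, 0, 0, 0)

def add(A, B):
    return tuple((a + b) % 2 for a, b in zip(A, B))

def mul(A, B):
    return tuple((A[2*i] * B[j] + A[2*i + 1] * B[2 + j]) % 2
                 for i in range(2) for j in range(2))

for g in ring:
    if g == zero:
        continue
    ideal = {zero, g}
    while True:  # close (g) under addition and left/right multiplication
        new = {add(a, b) for a in ideal for b in ideal}
        new |= {mul(r, a) for a in ideal for r in ring}
        new |= {mul(a, r) for a in ideal for r in ring}
        if new <= ideal:
            break
        ideal |= new
    assert len(ideal) == 16  # (g) = M2(GF(2)), so the ring is simple
```

Since every nonzero element generates the whole ring as a two-sided ideal, the only two-sided ideals are (0) and M2(GF(2)).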
So far we have been studying the radical of a ring. We now switch gears and focus
our interest on rings whose radicals are as trivial as possible; that is, we will study
Theorem 36: Let R be a semisimple Artinian ring (SSAR) and let ρ ≠ (0) be a
Proof: Since J(R) = (0), by Lemma 21, R has no nonzero nilpotent ideals. Since
Since R is Artinian, A has a minimal element; that is, there is some idempotent e0 ∈ ρ
If A(e0 ) 6= (0), then by Theorem 31, A(e0 ) must have an idempotent, say e1 . So
e0 e0 + e0 e1 − e0 e1 e0 + e1 e0 + e1 e1 − e1 e1 e0 − e1 e0 e0 − e1 e0 e1 + e1 e0 e1 e0 . Combining
this with the fact that e0^2 = e0, e1^2 = e1 and e0 e1 = 0, we get that
Since A(e0 ) is minimal, we must have A(e0 ) = (0) which is impossible since
must be a left ideal of R. Also, since B ⊂ A we have B^2 ⊂ BA = (0) and hence B is
a = se, then ae = se^2 = se = a).
To see that e is in the center of R, let y ∈ R. Since ye ∈ A and e is a left unity for
A, then eye = ye. Since ey ∈ A and e is a right unity of A, then eye = ey. Hence
To see the other inclusion, suppose ρ is a maximal regular right ideal of R and let
ρA = A ∩ ρ.
R/ρ ≅ (A + ρ)/ρ ≅ A/(A ∩ ρ) = A/ρA
ρA is a maximal ideal of A. Now since ρ is regular, there is a k ∈ R such that
regular right ideal of A and therefore J(A) ⊂ ρA . This is true for all maximal
regular right ideals ρ of R such that A * ρ and certainly for those which do contain
also that A is semisimple. We claim that R = Re ⊕ R(1 − e); that is, R is the direct
sum of Re with R(1 − e) (called the Peirce decomposition of R relative to e). Note
that since 1 − e is also in the center of R, then R(1 − e) is an ideal of R. For any
Hence A ≅ R/R(1 − e) as rings. Since A is the homomorphic image of the Artinian
ring R, then A must be Artinian.
Definition 42: A ring R is simple if R^2 ≠ 0 and the only two-sided ideals of R are
(0) and R.
Remarks: (1) A simple ring R with unity must be semisimple. Why? Since J(R) is
(2) A simple Artinian ring R must be semisimple. Why? R Artinian implies J(R) is
nilpotent and so, if J(R) = R, then R is nilpotent. R simple implies R^2 ≠ (0) and
Before proving our next result, we first show that in an Artinian ring we cannot have an
A1 ⊕ A2 ⊕ · · · ⊃ A2 ⊕ A3 ⊕ · · · ⊃ A3 ⊕ A4 ⊕ · · ·
could write a as a finite sum of elements in An+1 ⊕ An+2 ⊕ · · · , but this violates the
fact that the sum An ⊕ An+1 ⊕ · · · is direct (specifically, the intersection of An with
Theorem 43: A semisimple Artinian ring is the direct sum of a finite number of
Proof: We first claim that we can choose a nonzero minimal ideal of R. We can
nontrivial ideal (hence a proper right ideal). Now R Artinian implies we can choose
and has a unity. Thus A^2 ≠ (0). Suppose B ≠ (0) is an ideal contained in A. Then
ABA is an ideal of R and is nonzero since A has a unity. Now ABA ⊂ A and by the
B = A. Thus A is simple.
By the argument given in the proof of Lemma 40, we can write R = A ⊕ T0 where
fashion, we get ideals A = A0, A1, A2, ..., Ak, ... of R that are all simple Artinian
Tk = (0) for some k; otherwise we would have an infinite direct sum of ideals of R, a
Section 5: Wedderburn-Artin
Artinian rings. This result has numerous applications in noncommutative ring theory
and is of particular importance in the theory of group representations. We will see
that division rings play a fundamental role in the Wedderburn-Artin result and, in
R-module.
Since all our R-modules are right modules, then such a ring should really be called
right primitive.
Theorem 45: R is primitive if and only if there is a regular right ideal ρ ⊂ R such
M ≅ R/ρ for some maximal regular right ideal ρ. M faithful implies A(M) = (0).
A(M ) = A(R/ρ)
= {x ∈ R : (R/ρ)x = (0)}
= {x ∈ R : (R/ρ)x = ρ}
= {x ∈ R : Rx ⊂ ρ}
= (ρ : R)
Thus (ρ : R) = (0). Conversely, let ρ be a maximal regular right ideal of R such that
primitive.
Now R primitive implies (ρ : R) = (0) for some maximal regular right ideal and
since J(R) = ∩(ρ : R), ρ running over all maximal regular right ideals of R, we have
J(R) = (0).
ideal (0), it must be Artinian and thus is a SSAR. By Corollary 38, 1 ∈ R. Finally,
it is easy to show that any commutative ring with unity, in which (0) is a maximal
Corollary 47: (a) Any simple, semisimple ring is primitive. (b) Any simple ring
This corollary demonstrates that, with a few possible bizarre exceptions, we can
Let V ≠ 0 be a countably infinite dimensional vector space over any field (e.g.
Irreducible: Clearly (L, V) ≠ (0) since V ≠ 0 and since there is some T ∈ L such
Ideal: Let T ∈ I, S ∈ L. Then clearly both T ◦ S and S ◦ T have finite rank. Thus
both are contained in I making I an ideal.
To see that this ideal is properly contained in L, note that the identity map has
ring ∆ = C(M ), a division ring. We regard M as a right vector space over ∆ via
which are ∆-independent, and any w1 , ..., wn ∈ M , there exists r ∈ R such that
vi r = wi , i = 1, ..., n.
transformations on M over ∆.
but mr ≠ 0.
To see this, we will assume that such an r ∈ R does exist. Then in that case we
have that mrR 6= (0) and by the irreducibility of M, mrR = M . Thus, we can find
independent over ∆ and w1 , ..., wn ∈ M . Let Vi be the linear span over ∆ of the vj
with i ≠ j. Since vi ∉ Vi we can find a ti ∈ R with vi ti = wi, Vi ti = (0). If
m ∈ M, m ∉ V, there exists an r ∈ R such that V r = (0) but mr ≠ (0).
done.
an r ∈ A(V0) such that yr ≠ 0. This means, if mA(V0) = (0), then m ∈ V0. Note that
x ∈ M with x = wa, a ∈ A(V0 ), then xτ = ma. To see that τ is well defined note
xr = (wa)r = w(ar)
hence
ma = (wa)τ = (wτ )a
That is, (m − wτ )a = 0 for all a ∈ A(V0 ). By our induction hypothesis we get that
Density is a generalization of the key property of linear transformations. In fact, we
will sketch the proof that if R acts densely on M over ∆ and M is finite dimensional
over ∆, then R ≅ Hom∆(M, M). To see this: for any a ∈ R recall the map
one-to-one. For onto, fix a ∆-basis {v1 , ..., vn } for M . If φ ∈ Hom∆ (M, M ) is
mφ = ∑_{i=1}^{n} wi αi = ∑_{i=1}^{n} (vi r) αi = (∑_{i=1}^{n} vi αi) r = mr = mTr,
With the Density Theorem proven, we can now prove the Wedderburn-Artin
R ≅ Mn(D), where D is a division ring.
implies J(R) = (0) or J(R) = R. Then R^2 = R, together with the fact that J(R) is
J(R) = (0) and so R is semisimple and simple, thus it must be primitive.
density of R implies R ≅ Mn(D) and we are done. Assume not and let {v1, v2, ...}
Vm = spanD{v1, ..., vm} and let ρm = {x ∈ R : Vm x = (0)}. The ρm’s are right ideals
can choose r ∈ R such that Vm r = (0) but vm+1 r ≠ (0). This means that for all m
we can find r ∈ ρm such that r ∉ ρm+1. Hence this chain of right ideals is never
theorem for these simple/semisimple Artinian rings and we see why division rings
are so important, as they are the building blocks for some very ubiquitous rings
R ≅ Mn1(D1) ⊕ · · · ⊕ Mnk(Dk) where the Di are division rings.
Chapter 2: Division Rings
In this chapter we will give two methods for constructing division rings. The first
method uses skew Laurent series. This construction just requires a field F and an
would hope that we could somehow embed R in a division ring of fractions. There
are noncommutative domains which don’t have division rings of fractions. However,
if R satisfies an additional condition, called the Ore condition, then R will have a
division ring of fractions.
One can turn the set of all power series, denoted R[[x]] and the set of all Laurent
series, denoted R((x)), into rings under the usual operations of addition and
this does not give us a useful construction because we would have to start with a division ring.
Let R be a ring and σ : R −→ R an automorphism of R. Let R[[x; σ]] denote the set
of all expressions of the form ∑_{i=0}^{∞} ri x^i where ri ∈ R. We make R[[x; σ]] into a ring as
follows:
Addition:
∑_{i=0}^{∞} ri x^i + ∑_{i=0}^{∞} si x^i = ∑_{i=0}^{∞} (ri + si) x^i, for all ri, si ∈ R
Multiplication:
(∑_{i=0}^{∞} ri x^i)(∑_{i=0}^{∞} si x^i) = ∑_{i=0}^{∞} ti x^i, where ti = ∑_{j=0}^{i} rj σ^j(s_{i−j}).
Of course we have to check that R[[x; σ]] is in fact a ring under these operations.
Most of the ring axioms are straightforward to check. We remark that to check the
as desired.
Definition 2: R[[x; σ]] is called the ring of skew power series over R.
Proposition 3: The element p(x) = ∑_{i=0}^{∞} ri x^i ∈ R[[x; σ]] is invertible if and only if r0
is invertible in R.
Proof: Suppose r0 is an invertible element of R. Then we can write p(x) = r0(1 + q(x)),
where q(0) = 0. The inverse of 1 + q(x) is given by the geometric series
1 − q(x) + q(x)^2 − · · · .
q(0) = 0 implies that this expression is a skew power series (q(0) = 0 ensures that
p(x) = r0 (1 + q(x)) is the product of two invertible elements in R[[x; σ]] and hence
invertible.
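The geometric-series inversion can be sketched computationally. For simplicity the sketch below (mine, not the thesis's) takes σ = identity, so the series is unskewed, and truncates everything at order N = 8:

```python
from fractions import Fraction

# Truncated power series over Q with sigma = identity (the unskewed case),
# truncation order N = 8; a series is a list [r0, ..., r7].
N = 8

def mul(p, q):
    out = [Fraction(0)] * N
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            if i + j < N:
                out[i + j] += a * b
    return out

def invert(p):
    """Invert p with p[0] invertible, via p = r0(1+q) and 1 - q + q^2 - ..."""
    r0_inv = 1 / p[0]
    q = [c * r0_inv for c in p]
    q[0] = Fraction(0)                               # now p = r0 (1 + q)
    term = [Fraction(1)] + [Fraction(0)] * (N - 1)   # running (-q)^k
    geo = list(term)
    for _ in range(1, N):
        term = mul(term, [-c for c in q])
        geo = [g + t for g, t in zip(geo, term)]
    return [c * r0_inv for c in geo]                 # (1+q)^{-1} r0^{-1}

one = [Fraction(1)] + [Fraction(0)] * (N - 1)
p = [Fraction(c) for c in (2, 1, 0, 3, 0, 0, 0, 0)]
assert mul(p, invert(p)) == one   # p * p^{-1} = 1 up to degree N
```

Because q(0) = 0, the term (−q)^k contributes nothing below degree k, which is exactly why the geometric series makes sense coefficient by coefficient.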
must be invertible.
We remark that the above construction would have worked if we had just assumed
In this case we have, for all r ∈ R, r = xσ −1 (r)x−1 which gives x−1 r = σ −1 (r)x−1
and so the skew multiplication is also valid for negative exponents. Let R((x; σ))
denote the set of all expressions of the form ∑_{i=n}^{∞} ri x^i, n ∈ Z, where addition is as
usual and multiplication is skewed. One can check the ring axioms and we note that
above). We remark that R((x; σ)) can be obtained by localizing R[[x; σ]] at the
multiplicatively closed set {xn : n ∈ Z}, but we will not pursue this method.
Definition 4: R((x; σ)) is called the ring of skew Laurent series over R.
Corollary 5: The element p(x) = ∑_{i=n}^{∞} ri x^i ∈ R((x; σ)) is invertible in R((x; σ)) if
rn is invertible in R.
Proof: Write p(x) = rn x^n (1 + q(x)), where q(0) = 0. By Proposition 3, 1 + q(x) is invertible in
R[[x; σ]] hence in R((x; σ)). Since rn x^n is also invertible in R((x; σ)), the result
The result immediately yields a method for constructing division rings and we have
division ring.
Proof: By the preceding corollaries we know every nonzero element of F ((x; σ)) is
The center of a division ring is a field and we can view the division ring as a vector
space over its center. It is important to know the dimension of the division ring over
its center. We now turn to the problem of determining the center of a skew Laurent
series over a field and then determine its dimension over the center.
First we fix some notation. If σ ∈ Aut(F ), let Fσ denote its fixed field. That is,
Fσ = {a ∈ F : σ(a) = a}. The center will depend on the period of σ (i.e. the order
of σ as an element in the group Aut(F)). Note that if a ∈ Fσ, then for any n ∈ Z,
x^n a = σ^n(a) x^n = a x^n and thus Fσ is always contained in the center of F((x; σ)). If
Hence, if ord(σ) = n, then Fσ((x^n)) is contained in the center of F((x; σ)) and
C = Fσ, if σ has infinite period
C = Fσ((x^n)), if σ has finite period n
Proof: If p(x) = ∑_{i=n}^{∞} ai x^i ∈ C, then xp(x) = p(x)x gives
∑_{i=n}^{∞} σ(ai) x^{i+1} = ∑_{i=n}^{∞} ai x^{i+1}
automorphism.
proving C ⊂ Fσ . By the preceding discussion, we can conclude C = Fσ .
Case 2: σ has finite order n. Then σ^i = 1 implies n|i, say i = n q_i, some q_i ∈ Z. Thus
p(x) = ∑_{i=n}^{∞} ai x^i = ∑_{i=n}^{∞} ai x^{n q_i} ∈ Fσ((x^n)). Hence, in this case, C = Fσ((x^n)).
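The finite-period case can be probed numerically. Take F = C and σ = complex conjugation, which has period 2 and fixed field Fσ = R; this is my choice of example, not one from the text. At the level of truncated skew polynomials one can check that a nonreal scalar fails to be central, while real scalars and powers of x^2 commute with everything, matching C = Fσ((x^2)) = R((x^2)):

```python
# Truncated skew polynomials over F = C with sigma = complex conjugation
# (period 2, fixed field R).  A list [a0, a1, ...] stands for sum a_i x^i,
# and the twist x a = sigma(a) x gives (a x^i)(b x^j) = a sigma^i(b) x^(i+j).
def sigma(z, i):
    return z.conjugate() if i % 2 else z

def skew_mul(p, q):
    out = [0j] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * sigma(b, i)
    return out

x, x2 = [0, 1], [0, 0, 1]
z = [2 + 3j]          # a nonreal scalar
r = [5]               # a real scalar, i.e. an element of F_sigma

assert skew_mul(x, z) == [0, 2 - 3j]        # x z = conj(z) x: z not central
assert skew_mul(z, x) == [0, 2 + 3j]
assert skew_mul(x, r) == skew_mul(r, x)     # real scalars are central
assert skew_mul(x2, z) == skew_mul(z, x2)   # and so are powers of x^2
```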
In order to determine the dimension of F ((x; σ)) over its center we need a theorem
Artin’s Lemma: Let F be a field, let Fσ be the fixed field of σ, and let σ have
period n. Then
[F : Fσ ] = n
Theorem 8: If σ has finite order, say order n, then the dimension of F((x; σ)) over
its center Fσ((x^n)) is n^2. Otherwise, the order of σ is infinite and the dimension of
F((x; σ)) over its center is infinite.
Proof: If the period of σ is infinite then it is clear that the dimension of F((x; σ))
is infinite over its center. Thus, assume that σ^n = I, some n ∈ Z. First we show that
[F((x; σ)) : F((x^n))] = n. Consider S = {1, x, x^2, . . . , x^{n−1}}; we wish to show that S is
a basis for F((x; σ)) over F((x^n)). By the division algorithm i = n ti + ri, where
0 ≤ ri < n and ti ∈ Z. Thus x^i = (x^n)^{ti} x^{ri} which implies S spans F((x; σ)) over
F((x^n)). Now assume ∑_{j=0}^{n−1} hj x^j = 0, where hj = ∑_{i=0}^{∞} aij x^{ni}; then
0 = ∑_{j=0}^{n−1} (∑_{i=0}^{∞} aij x^{ni}) x^j = ∑_{j=0}^{n−1} ∑_{i=0}^{∞} aij x^{ni+j}
Since j runs from 0 to n − 1, we get that the powers of x must all be distinct, which
forces aij = 0. Thus S is a basis and indeed [F((x; σ)) : F((x^n))] = n.
Then by Artin’s Lemma we get that [F : Fσ] = n. If {c1, ..., cn} is a basis for F over
Fσ then this set must also be a basis for F((x^n)) over Fσ((x^n)). Then by the tower
theorem [F((x; σ)) : Fσ((x^n))] = [F((x; σ)) : F((x^n))][F((x^n)) : Fσ((x^n))] = n^2.
(1) The first known example of a division ring is the ring of quaternions. The quaternions, denoted
H, form the four dimensional real vector space with R basis {1, i, j, k} having relations
i2 = j 2 = k 2 = −1
ij = k, jk = i, ki = j
ji = −k, kj = −i, ik = −j
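These relations determine the entire multiplication, which makes the division-ring property easy to verify numerically; the following is a standard computation sketched by me, not code from the thesis:

```python
from fractions import Fraction

# Quaternion arithmetic built directly from the relations above; q is a
# 4-tuple (a, b, c, d) standing for a + bi + cj + dk.
def qmul(p, q):
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 + c1*a2 + d1*b2 - b1*d2,
            a1*d2 + d1*a2 + b1*c2 - c1*b2)

i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
assert qmul(i, j) == k and qmul(j, i) == (0, 0, 0, -1)   # ij = k, ji = -k
assert qmul(i, i) == (-1, 0, 0, 0)                        # i^2 = -1

# Every nonzero q is invertible, q^{-1} = conjugate(q)/|q|^2, which is
# what makes H a division ring.
def qinv(q):
    a, b, c, d = q
    n = a*a + b*b + c*c + d*d
    return (a/n, -b/n, -c/n, -d/n)

q = (Fraction(1), Fraction(2), Fraction(-1), Fraction(3))
assert qmul(q, qinv(q)) == (1, 0, 0, 0)
```

The norm a^2 + b^2 + c^2 + d^2 vanishes over R only at q = 0, which is exactly where the inversion formula is needed and exactly where it fails.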
and hence ai ∈ R, for all i. Also, for all u ∈ C, up = pu ⇒ uai = ai σ^i(u). Hence, if
ai is not zero then σ^i = identity on C and since σ has period two, we must have that
In the previous section we were able to construct division rings from the Laurent
and power series rings. Now we will show what conditions are required in order for a
satisfy the Ore condition (or be an Ore domain). As this is relatively easy to check
it becomes a great way of adding to our library of division rings.
φ(R).
Alternatively,
Definition 9b: We say that R has a Right Division Ring of Fractions if there
Definition 10: Let R be a ring. R is said to satisfy the right Ore condition if,
That is, for all a, b ∈ R× there exist a′, b′ ∈ R such that ab′ = ba′ ≠ 0
Definition 11: Any domain R that satisfies the right Ore condition will be known as
For the duration of this chapter we will only be dealing with right division rings of
fractions and right Ore domains. Therefore, we will refer to each without the “right”.
Theorem 12: Let R be any ring. R has a division ring of fractions D if and only if
R is an Ore domain.
Proof: First suppose that R has a division ring of fractions D such that every
noncommutative) then there exist r′, s′ ∈ R× such that s^{−1}r = r′s′^{−1} ⇒ rs′ = sr′.
Conversely, since we want our elements to be of the form rs^{−1}, r, s ∈ R, we naturally
To show the transitive property holds suppose that (a, b) ∼ (a′, b′) and that
au = a′u′
a′v = a″v′
bu = b′u′ ≠ 0
b′v = b″v′ ≠ 0
get
and similarly
aα = a″β, bα = b″β ≠ 0 ⇒ (a, b) ∼ (a″, b″)
The equivalence class of (a, b) will be denoted a/b. To define a ring structure on
and multiplication by
Before we show that these operations are well-defined we will show that addition
does not depend on the choice of m and it will follow that multiplication does not
Note that a/b = ar/br = ar/m and c/d = cr′/dr′ = cr′/m. If m′ = bs = ds′ where
= a/b + c/d
To show that addition is well-defined let a/b = a′/b′. Then there exist s, s′, s″ such
that m′ = bs = ds′ = b′s″ and as = a′s″. Then,
= a′/b′ + c/d
To show that multiplication is well-defined we first note that a/b = ax/n and
c/d = n/dy. Again let a/b = a′/b′. Choose u, v such that bu = b′v and au = a′v. Let
(a/b)(c/d) = (au/bu)(cv/dv)
= aux/dvy
= a′vx/dvy
= a′x/dy
A similarly tedious argument will show that the ring laws do in fact hold. We will
denote this ring by D. Now a/b = 0 if and only if a/b = 0/b′ for all b′ ∈ R× if and
division ring.
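In the commutative case the construction collapses to ordinary fractions, which gives a quick sanity check of the addition recipe; the toy instance below, with R = Z, is my own:

```python
from fractions import Fraction
from math import lcm

# Addition recipe from the construction, specialized to R = Z: pick a
# common right multiple m = b r = d r', then a/b + c/d = (a r + c r')/m.
def ore_add(a, b, c, d):
    m = lcm(b, d)
    r, rp = m // b, m // d       # b r = d r' = m
    return (a * r + c * rp, m)

num, den = ore_add(1, 6, 3, 4)
assert Fraction(num, den) == Fraction(1, 6) + Fraction(3, 4)  # 11/12
```

All the work in the noncommutative proof goes into showing that the answer does not depend on which common right multiple m is chosen; over Z that independence is automatic.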
Section 3: Noetherian Rings
In this section we prove that all Noetherian domains satisfy the Ore condition. This
will allow us to more easily recognize Ore domains and provide substantial
result we define some terms and prove a result concerning Noetherian rings.
The ascending chain condition (ACC) on right ideals: Given any ascending
An = Aj for all j ≥ n
Finitely Generated Right Ideal: Let A be a right ideal of R, then there exist
Definition 13: A Noetherian ring is a ring that satisfies any of the preceding
three conditions.
Proof: “(a) ⇒ (b)” Let S 6= ∅ be a collection of right ideals and suppose R satisfies
the ACC on right ideals. Let A1 ∈ S. If A1 is not maximal then there is an A2 such
We continue this way until we reach an n ∈ N such that An = An+1 = · · · (because
“(b) ⇒ (c)” Let R satisfy the maximum condition on right ideals and assume there
“(c) ⇒ (a)” Suppose that every right ideal is finitely generated. Given an ascending
ideal. This gives us that ∪i≥1 Ai is finitely generated and thus there exist b1, ..., bn
such that ∪i≥1 Ai = b1R + · · · + bnR. Since each bi ∈ Aj for some j then there exists k
Since R is a right Noetherian ring it satisfies the ACC so that there exists n such
Since R is a domain then cancellation holds. Thus
ck ≠ 0, b ≠ 0 ⇒ aR ∩ bR ≠ (0)
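For the Noetherian domain Z the conclusion aR ∩ bR ≠ (0) is visible directly, since common right multiples of a and b are exactly the multiples of lcm(a, b); this commutative sanity check is mine, and the theorem's real content is the noncommutative case:

```python
from math import lcm

# In Z, the intersection aZ ∩ bZ is lcm(a, b)Z, which is nonzero whenever
# a and b are nonzero -- the Ore condition holds trivially.
a, b, bound = 6, 10, 200
common = {x for x in range(1, bound) if x % a == 0} & \
         {x for x in range(1, bound) if x % b == 0}
assert common == {x for x in range(1, bound) if x % lcm(a, b) == 0}
assert min(common) == 30   # aZ ∩ bZ = 30Z ≠ (0)
```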
With this theorem established we wish to expand our supply of division rings via
Noetherian rings. We will prove that if R is Noetherian then the skew polynomial
ring R[x; σ] is as well. As a consequence we will get that R[x; σ] is an Ore domain
whenever R is an Ore domain (this is called the Hilbert Basis Theorem). However,
before we can prove this theorem we must state the module version of Theorem 13.
We only state it because in the proof one needs only to replace the word ideal with
the word submodule.
(a) The ACC on submodules
Theorem 17 (Hilbert Basis Theorem): Let R be a right Noetherian ring and
σ ∈ Aut(R). Then R[x; σ] is right Noetherian.
Proof: Let I 6= 0 be a right ideal of R[x; σ]. We will show that I is finitely
generated.
then J must be finitely generated, say J = < r1 , ..., rk > with ri ≠ 0 for all i. Now
since ri ∈ J for i = 1, ..., k ⇒ there are pi (x) ∈ I, for i = 1, ..., k, such that ri are the
leading coefficients of the pi (x) and where the deg pi (x) = ni . Let
n = max{n1 , ..., nk }.
Now let N = R + Rx + · · · + Rxn−1 . That is, N contains all the elements of R[x; σ]
This gives us that I ∩ N is finitely generated. Thus, we can let I ∩ N =< q1 , ..., qt > .
Let I0 =< p1 , ..., pk , q1 , ..., qt > be a right ideal of R[x; σ]. Clearly I0 ⊂ I ∩ N ⊂ I.
Now let p(x) ∈ I such that deg p(x) < n. This gives us that p(x) ∈ I ∩ N ⇒ there
deg p(x) = m ≥ n and suppose that for all q(x) ∈ I with deg q(x) < m that
q(x) ∈ I0 . Also, we let r be the leading coefficient of p(x). Then p(x) = rxm + · · · .
q = (p1 σ −n (a1 ) + · · · + pk σ −n (ak ))xm−n = [(r1 a1 xn + · · · ) + · · · + (rk ak xn + · · · )]xm−n =
m ⇒ p − q ∈ I0 ⇒ p ∈ I0 .
Theorem 18: Let R be a right Ore domain and σ ∈ Aut(R). Then R[x; σ] is a
right Ore domain.
In this section we will define the skew polynomial ring R[x; σ, δ] and show that for
any division ring K, K[x; σ, δ] has a division ring of fractions. We then widen the
restrictions by allowing R to be an Ore domain, and we will not only show that
R[x; σ, δ] has a division ring of fractions but we will explicitly find it. To prove the
When dealing with polynomial rings it is important that the degree of the
polynomials is preserved under commutation and multiplication. Let R be a ring and
let R < x > be the additive group freely generated over R by x. The elements of
xr = σ(r)x. Defining the multiplication this way preserves the degree of xr, thus
giving us an integral domain (we earlier called this ring R[x; σ]). We will further
complicate matters by taking xr = δ(r) + σ(r)x. This is the more general case of
σ-derivation.
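To make the commutation rule xr = δ(r) + σ(r)x concrete, here is a small sketch (our own illustration, not from the thesis) of skew polynomial arithmetic with coefficients in Q[t], σ the identity, and δ = d/dt; this is the case K[y][x; δ] that reappears later as the Weyl algebra. A skew polynomial Σ ai xi is stored as the list [a0 , a1 , ...] of its left coefficients.

```python
import sympy as sp

t = sp.symbols('t')

# commutation rule x·r = δ(r) + σ(r)·x, with σ = id and δ = d/dt
# (an assumption made for this sketch; any σ-derivation would do)
def x_times(p):
    """Left-multiply the skew polynomial p = [a0, a1, ...] by x."""
    out = [sp.diff(a, t) for a in p] + [sp.Integer(0)]   # the δ(a_i)x^i part
    for i, a in enumerate(p):
        out[i + 1] += a                                   # the σ(a_i)x^{i+1} part
    return out

def skew_mul(p, q):
    """Multiply p·q using a_i x^i · q = a_i (x^i · q)."""
    result = [sp.Integer(0)] * (len(p) + len(q))
    for i, a in enumerate(p):
        term = q
        for _ in range(i):
            term = x_times(term)
        for j, c in enumerate(term):
            result[j] += sp.expand(a * c)
    while len(result) > 1 and result[-1] == 0:
        result.pop()
    return result

# x·t = δ(t) + σ(t)x = 1 + t·x, so x·t − t·x = 1
assert skew_mul([sp.Integer(0), sp.Integer(1)], [t]) == [1, t]
```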
Proof: Since it is true that x(r + s) = xr + xs then x(r + s) = δ(r + s) + σ(r + s)x.
Also, xr + xs = δ(r) + σ(r)x + δ(s) + σ(s)x = δ(r) + δ(s) + [σ(r) + σ(s)]x. Thus,
δ(r + s) = δ(r) + δ(s) and σ(r + s) = σ(r) + σ(s). So we get that both maps are
Thus we get that σ(rs) = σ(r)σ(s) and δ(rs) = δ(r)s + σ(r)δ(s). To show that σ is
injective suppose r ≠ r′ ∈ R. Then
Definition 21: Let R be any ring and let σ, δ be as above. Then R[x; σ, δ] is called
Proposition 22: The degree function on R[x; σ, δ] satisfies the following three
conditions for any domain R:
Proof: (i) and (ii) are trivial. To see that (iii) holds we note that in the previous
In order to prove the next theorem it is important to note that every principal right
ideal domain is right Noetherian: every right ideal is generated by a single element,
hence finitely generated, and thus satisfies the definition of Noetherian.
Theorem 23: Let K be a division ring with σ, δ as defined above. Then K[x; σ, δ]
has a division ring of fractions.
Proof: We will show that K[x; σ, δ] has a division ring of fractions by showing
that it is a P.R.I.D., which will give us that it is Noetherian. This will prove the
theorem, as earlier we proved that Noetherian implies Ore domain. To show that
K[x; σ, δ] is a P.R.I.D. we use the division algorithm. Let P = K[x; σ, δ] and let
deg f < deg g, let q = 0, r = f , and we are done. Otherwise, deg f ≥ deg g. In this
then gq1 has the same leading term as f and so deg(f − gq1 ) < deg f . If
deg(f − gq1 ) ≥ deg g we can continue this process, and after finitely many steps we
obtain an r ∈ P such that deg r < deg g. Thus the claim is proven and we return to
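The division algorithm being used is the usual one for polynomials over a division ring, with σ and δ inserted whenever coefficients are pushed past x. In the special case σ = id, δ = 0 over a field it reduces to ordinary polynomial division, which we can sanity-check symbolically (this check, using sympy, is ours):

```python
import sympy as sp

x = sp.symbols('x')

# divide f by g: f = g·q + r with deg r < deg g (ordinary polynomial
# division, i.e. the σ = id, δ = 0 special case)
f = x**3 + 2*x + 1
g = x**2 + 1
q, r = sp.div(f, g, x)

assert sp.expand(g * q + r - f) == 0
assert sp.degree(r, x) < sp.degree(g, x)
```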
We will refer to the division ring of fractions of K[x; σ, δ] as the Skew Function
Definition 24: A regular right Ore set is a multiplicative subset of a ring,
The following theorem is a generalization of Theorem 12. The proofs are very
Theorem 25: Let R be any ring and let S be a regular right Ore set. Then R can
f : R −→ K is the embedding, then every element of K has the form f (a)f (b)−1 ,
where a ∈ R, b ∈ S.
Definition 26: The ring K constructed in the previous theorem is called the
Proposition 27: Let R be any ring, let S be a right regular Ore set in R, and let
denominator.
Proof: Let a/b, a′/b′ ∈ RS . In Theorem 12 we showed that these two elements
a/b = au/m, a′/b′ = a′u′/m. In the general case we use induction on the number of
elements. So if we are given a1 /b1 , ..., an /bn then we can bring a2 /b2 , ..., an /bn
to a common denominator, say a2 u2 /m, ..., an un /m. By the Ore condition there
the proof.
Theorem 28: Let R be a right Ore domain with an injective endomorphism σ and
Since R is an Ore domain, it has a division ring of fractions; call it K.
We define σ on K by
u, u′ ∈ R× such that au = a′u′ , bu = b′u′ ≠ 0. We get that
σ(ab−1 ) = σ(au(bu)−1 )
= σ(au)σ(bu)−1
= σ(a′u′ )σ(b′u′ )−1
= σ(a′b′−1 )
that σ is well-defined we can see that δ is well-defined. Thus we can now form the
skew polynomial ring K[x; σ, δ]. K[x; σ, δ] has a division ring of fractions K(x; σ, δ).
f1 , g1 ∈ R[x; σ, δ]. Thus we get that h = f /g = (f1 /w)/(g1 /w) = f1 /g1 . Thus, every element of
Domain. Note that we also showed that the division ring of fractions for R[x; σ, δ] is
in fact K(x; σ, δ).
ideals.
l(S) = {x ∈ R : xs = 0, ∀s ∈ S}
r(S) = {x ∈ R : sx = 0, ∀s ∈ S}
Definition 33: A left ideal I of R is said to be essential if for all non-zero left
ideals J of R, I ∩ J ≠ (0).
Lemma 34: Let R be a semiprime ring satisfying the A.C.C. on left annihilators. If
A ⊃ B are left ideals of R and r(A) ≠ r(B), then there is an a ∈ A such that
Proof: It is easy to see that A ⊃ B ⇒ r(A) ⊊ r(B), since r(A) ⊆ r(B) and
r(A) ≠ r(B). Let U be
x ∈ A such that xua ≠ 0 with xua ∈ Aua ∩ B. Since x ∈ A then r(x) ⊃ r(A).
Now consider r(x) ∩ U . This is clearly a right annihilator containing r(A) and itself
is contained in U . Since uaU ⊂ r(x) and uaU ⊄ r(A), then r(A) is properly
We now proceed with two corollaries of the above lemma that are required to prove
Goldie's theorem.
Corollary 35: Let R be as in the lemma above and let Rx and Ry be essential left
By the lemma above we get that there is a non-zero left ideal T ⊂ A such that
T ∩ l(y) = (0).
This implies that T xy ≠ (0). Notice that since T ⊂ A then for all t ∈ T, ty ∈ A.
Now T xy = {rxy : r ∈ R, rx ∈ T }. rx ∈ T implies that rxy ∈ A which gives us that
T xy ⊂ A.
regular.
Proof: Since Ra is essential then Ra ∩ R ≠ (0). Coupled with the lemma we get
We now wish to show that l(a) = 0. Combined, l(a) = 0 and r(a) = 0 will give us
that a is regular.
By the A.C.C. on left annihilators there is an integer n such that l(an ) = l(an+1 ).
Thus, we get that yan+1 = 0 which implies that y ∈ l(an+1 ) = l(an ). Hence,
x = yan = 0. Since x was chosen arbitrarily then Ran ∩ l(a) = (0). But Ran is
essential (by the previous corollary applied inductively n times). Thus we must have
For the remainder of the section we will assume that R is a semiprime Goldie Ring.
annihilators. Thus, r(Li ) ≠ r(Li+1 ). Applying the above lemma we get that there
exist non-zero left ideals Cn of R such that Cn ⊂ Ln and Cn ∩ Ln+1 = (0). The Cn
form a direct sum of left ideals. Since R is Goldie this sum is finite and hence the
Proof: Let A be a non-zero left ideal of R such that A ∩ Rc = (0). Since l(c) = 0
If a0 + a1 c + · · · + an cn = 0, ai ∈ A, then a0 ∈ A ∩ Rc ⇒ a0 = 0. Since l(c) = 0,
a1 + a2 c + · · · + an cn−1 = 0. Now repeat the first argument to get that ai = 0, for all
i = 1, ..., n.
Since R is Goldie we do not have any infinite direct sums and thus
Note that if S is an annihilator ideal for some ideal T then ST = (0). Thus,
moreover there is a finite direct sum of such ideals which is an essential left ideal of
R.
Proof: Let S be a non-zero minimal annihilator ideal. Let T ≠ (0) be a left ideal of
that S has no infinite direct sums of left ideals. Since R satisfies the A.C.C. on left
annihilators then so does S, which makes S a Goldie ring. Now suppose that A, B
are ideals of S such that AB = (0). From SB ⊂ B we have that ASB = (0) which
that A is essential.
we can find a non-zero minimal annihilator ideal which fails to intersect A. In this
case we can lengthen the direct sum A to include this ideal. However, this new
Proof: Case 1: R is a prime ring. Let a ∈ I such that l(a) is a minimal ideal. Such
a choice can be made by lemma 44. If a is regular we are done. Thus, suppose a is
not regular. By corollary 43 Ra ∩ J = (0) for some non-zero left ideal J of R. Since
Now in a prime ring l(a)J = (0) coupled with J 6= (0) forces us to have that
Case 2: R is a semiprime ring. Let A = S1 ⊕ · · · ⊕ Sn be a maximal direct sum of
minimal annihilator ideals. Since Si are prime and since I ∩ Si are essential in Si
Theorem 42: Let R be a semiprime left Goldie ring. Then R has a left quotient
ring Q = Q(R).
Proof: Let a, b ∈ R such that a is regular. This gives us that l(a) = r(a) = (0),
xb ∈ Ra. Thus, since Ra is essential we must have that M is essential. Hence, there
Thus far we have shown that these semiprime Goldie rings have a quotient ring.
The second of Goldie's theorems explicitly describes the structure of these rings. Since
the following theorem is not necessary in this section we state it without proof.
Chapter 3: Simple Rings
The Wedderburn-Artin Theorem has shown us why simple rings are so important
and so we will shift our focus to these rings and, amongst other interesting facts,
In this section we pick up where we left off at the end of chapter 2 section 1. We
will give conditions that give us simple rings from R((x; σ)).
Definition 1: A Ring R is simple if R 6= 0 and if the only two sided ideals of R are
(0) and R.
Proof: If a is non-zero then the set RaR = {r1 as1 + · · · + rn asn : ri , si ∈ R, n ∈ N}
is a non-zero two-sided ideal of R. By the simplicity of R we get that R = RaR.
Thus, there exist ri , si ∈ R such that r1 as1 + · · · + rn asn = 1. Now suppose that
for all r ∈ R, ar = sa, for some
s ∈ R. We show that aR is a two sided ideal (it will follow that Ra is a two sided
rx ∈ R. Thus, since aR, Ra are both non-zero two sided ideals then we must have
a1 a = aa2 = 1. So a is invertible.
Proof: Suppose I is a nonzero two-sided ideal of R((x; σ)). For any nonzero
p(x) ∈ I, there exists n ≥ 0 such that h(x) = p(x)xn ∈ R[[x; σ]]. In fact,
g(x) = r1 h(x)s1 + · · · + rm h(x)sm = 1 + b1 x + b2 x2 + · · ·
for some bj ∈ R. Thus g(x) is an invertible element in I which implies I = R((x; σ))
We now try to understand the center of the skew-Laurent series ring with coefficients
from a non-commutative ring, as we did before for Division Rings. However, before
we can try and formalize any properties of the center of these rings we must first
inn(u)(r) = u−1 ru
Lemma 5: Let σ ∈ Aut(R) such that σ k = inn(u) for some k ∈ N. Then there
Proof: For any k ≥ 1 and for any i ≥ 0 we get that
σ k (r) = (σ i ◦ σ k ◦ σ −i )(r)
= σ i (inn(u)(σ −i (r)))
= σ i (u−1 σ −i (r)u)
= σ i (u)−1 rσ i (u)
= inn(σ i (u))(r)
Thus,
σ k2 = inn(σ k−1 (u))inn(σ k−2 (u)) · · · inn(σ(u))inn(u)
where the last equality comes from repeating the remark above k times. Let
a = uσ(u) · · · σ k−1 (u). Then
σ(a) = σ(u)σ 2 (u) · · · σ k (u)
= u−1 au
since σ k (u) = inn(u)(u) = u−1 uu = u.
Theorem 6: Let R be a simple ring with center F and let C denote the center of
(1) Fσ , if σ k is not inner for all k ∈ Z;
(2) Fσ ((uxn )), where n is the least positive integer for which σ n is an inner
automorphism determined by a σ-invariant element u.
Proof: Let p = Σi≥m si xi ∈ C. As in the proof of Theorem 6 from Chapter 2, if
If bp = pb, then (again in proof of Theorem 6 from Chapter 2) bsi = si σ i (b). Since
Clearly Fσ ⊂ C. To see that C ⊂ Fσ note that since σ k is not inner for all k ∈ Z
Case 2: Choose n to be the smallest positive integer such that σ n = inn(u) for some
si−1 rsi = u−ti ruti ⇒ si u−ti r = rsi u−ti ⇒ si u−ti ∈ F . Hence, for some
λi u−ti xnti = λi (u−1 xn )ti ∈ Fσ ((uxn )). From here it is clear that
p = Σ si xi = Σ λi (u−1 xn )ti ∈ Fσ ((uxn )), and clearly Fσ ((uxn )) ⊂ C
We now give another skew construction that produces simple rings using
automorphisms. We will invert the indeterminate x, but the elements in the ring
Definition 7: Let R be a ring and let σ ∈ Aut(R). A Laurent polynomial is a
polynomial expression of the form rk xk + rk+1 xk+1 + · · · + rn xn , where k, n ∈ Z.
Multiplication is determined by
xr = σ(r)x
commutation rule to the negative powers of the indeterminate. The set of all such
Laurent polynomials under the usual addition and this skew multiplication will be
Polynomials. Note that we may call R[x, x−1 ; σ] a ring as we have proved that a
Theorem 8: Let R be a simple ring, Let C be the center of R[x, x−1 ; σ] and F the
center of R. Then C =
(2) Fσ [uxn , ux−n ], where n is the least positive integer such that σ n is an inner
automorphism determined by a σ-invariant element u. Fσ [uxn , ux−n ] is the set of all
Theorem 9: Let R be a simple ring and I a non-zero ideal of R[x, x−1 ; σ]. Then I
Proof: Let R[x; σ] be set of all polynomials over R with the skew multiplication
defined above. Let I be a non-zero ideal of R[x, x−1 ; σ] and let Ix = I ∩ R[x; σ].
must have degree less than that of g. To see this note that rg = rxm + · · · and
with xn then ai xi = λi usk +ti (xn )ti xk = usk λi (uxn )ti xk . Thus,
g = usk (λk (uxn )tk + · · · + λm (uxn )tm )xk = vxk h,
i=k
where v is a unit and h is in the center of R[x, x−1 ; σ]. Since vxk is invertible in
Corollary 10: If R is a simple ring and σ ∈ Aut(R) such that no positive power of
Proof: In the above proof we showed that if R is a simple ring and σ ∈ Aut(R)
We now give a construction of a simple ring not unlike our construction of a division
ring given that the ring was an Ore Domain. It turns out we don’t need any strong
requirement like an Ore domain but instead we will require that the center of our
ring not be trivial. Now let R× be all the non-zero elements of R and let S be the
Transitivity: Let (r, s) ∼ (r′ , s′ ) and (r′ , s′ ) ∼ (r′′ , s′′ ) ⇒ rs′ = r′ s and r′ s′′ = r′′ s′ .
Then
rs′ = r′ s ⇔
rs′′ s′ = r′ s′′ s ⇔
rs′′ s′ = r′′ s′ s ⇔
rs′′ = r′′ s
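The equivalence (r, s) ∼ (r′, s′) ⇔ rs′ = r′s is the familiar cross-multiplication test for equality of fractions; because S is central, the cancellation used in the chain above is available exactly as it is in Z. A one-line illustration (ours):

```python
from fractions import Fraction

# (r, s) ~ (r2, s2) iff r·s2 == r2·s, the cross-multiplication test
r, s, r2, s2 = 2, 4, 3, 6
assert r * s2 == r2 * s                     # (2, 4) ~ (3, 6)
assert Fraction(r, s) == Fraction(r2, s2)   # Fraction equality agrees
```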
We will denote the equivalence class of (r, s) by rs−1 and we call the set of all these
addition by,
We note that the additive identity is given by 0 = 0 · 1−1 and the multiplicative
we will leave it out. We do, however, show that these operations are well-defined. To
see this let rs−1 = r′ s′−1 . We will show that for any ab−1 ∈ RS −1 ,
rs−1 + ab−1 = r′ s′−1 + ab−1 and that (rs−1 )(ab−1 ) = (r′ s′−1 )(ab−1 ).
First, rs−1 = r′ s′−1 ⇒ rs′ = r′ s, thus
= r′ s′−1 + ab−1
This procedure for forming RS −1 from R and a multiplicative subset of the center of
(3) Every ideal of RS −1 is of the form IS −1 , where I is an ideal of R.
simple ring.
homomorphism. Thus (1) is proven. For the second statement let s ∈ S and
consider (1, s) · (s, 1) = (1 · s−1 )(s · 1−1 ) = (1 · s)(s · 1)−1 = 1. Now let J be an ideal
Proof: Let eij be the matrix with a 1 in the (i, j) entry and 0 everywhere else. Then
eij ers = eis if j = r, and eij ers = 0 otherwise.
Then e1i Aej1 is the matrix with aij , the (i, j) entry of A, in its (1, 1) entry and 0
everywhere else.
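Both identities used here, eij ers = eis when j = r (and 0 otherwise) and the fact that e1i A ej1 isolates the (i, j) entry of A, are easy to check numerically; the sketch below (with 0-indexed numpy matrices, our convention) does so.

```python
import numpy as np

def e(i, j, n=3):
    """Matrix unit e_ij (0-indexed): 1 in entry (i, j), 0 elsewhere."""
    m = np.zeros((n, n))
    m[i, j] = 1.0
    return m

# e_ij e_rs = e_is when j = r, and 0 otherwise
assert np.array_equal(e(0, 1) @ e(1, 2), e(0, 2))
assert np.array_equal(e(0, 1) @ e(2, 2), np.zeros((3, 3)))

# e_1i A e_j1 has the (i, j) entry of A in its (1, 1) slot and 0 elsewhere
A = np.arange(1.0, 10.0).reshape(3, 3)
P = e(0, 1) @ A @ e(2, 0)
assert P[0, 0] == A[1, 2] and np.count_nonzero(P) == 1
```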
Since F is a field and aij ∈ F then there must exist (aij )−1 ∈ F such that
Let A′ be the matrix equal to A everywhere except that its (i, j) entry is (aij )−1 .
Since e1i A′ ej1 ∈ Mn (F ), the product of e1i Aej1 and e1i A′ ej1 ∈ Mn (F ) is an
To see that e22 , ..., enn are all elements of I note that ekk is the product of eki Aejk
and eki A′ ejk for all 1 ≤ k ≤ n, where eki Aejk ∈ I for all 1 ≤ k ≤ n and
We will now prove the converse of Wedderburn-Artin. Notice that the above proof
gives us that Mn (D) is simple, for any division ring D. We must just show that it is
also Artinian. To do this we prove a few lemmas. Note that all the proofs are in
terms of either right ideals or left ideals but the proof of the opposite side is exactly
the same.
Lemma 13: Let R = Mn (D), for any division ring D. For each i = 1, 2, ..., n, eii R
is a minimal right ideal of R.
Proof: Clearly eii R is a right ideal. Let A be a non-zero right ideal of R such that
A ⊂ eii R. Let 0 ≠ x = xi1 ei1 + · · · + xin ein ∈ A. If xij ≠ 0 for some j then
x(xij )−1 eji = eii ∈ A
Lemma 14: Suppose R is a ring which can be written as the finite direct sum of
minimal right ideals. Then every right ideal of R is of the form eR, for some
idempotent e ∈ R.
Proof: Assume R = M1 ⊕ · · · ⊕ Mk , where each Mi is a minimal right ideal. Let A be a
right ideal of R. If A = (0) or A = R then the result is true. Thus suppose that A is
can assume that M2 is not contained in A1 . This gives us that A1 ∩ M2 = (0). And
A2 = A1 ⊕ M2 = A ⊕ M1 ⊕ M2
since e ∈ A, A = eR.
Thus, any ring which has the property that it can be written as the finite direct sum
of minimal right ideals must be right Noetherian (by the finitely generated property
of Noetherian rings).
Lemma 15: Every right ideal of R = Mn (D) is of the form eR, for some
idempotent e ∈ R.
hence the result follows from Lemma 14. Similarly for left ideals. In particular Mn (D) is
Recall that for left annihilators in a ring R, if S ⊂ T then l(S) ⊃ l(T ), and
Lemma 16: If R is any ring and e ∈ R is idempotent, then r(Re) = (1 − e)R and
x = (1 − e)x + ex = (1 − e)x ⇒
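The containment (1 − e)R ⊂ r(Re) comes down to e(1 − e) = 0; the reverse containment is the computation displayed above. A numerical check of the easy direction in M2 (R), with e = e11 (our example):

```python
import numpy as np

e = np.array([[1.0, 0.0], [0.0, 0.0]])   # the idempotent e = e_11
one = np.eye(2)
assert np.array_equal(e @ e, e)           # e is idempotent

# anything of the form (1-e)y is killed on the left by every xe in Re,
# i.e. (1-e)R ⊂ r(Re), since e(1-e) = 0
x = np.array([[2.0, 3.0], [5.0, 7.0]])
y = np.array([[1.0, 4.0], [6.0, 2.0]])
assert np.array_equal((x @ e) @ ((one - e) @ y), np.zeros((2, 2)))
```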
Theorem 17: If D is a division ring then Dn = Mn (D) is both left and right
Artinian.
Proof: By Lemma 15, an arbitrary descending chain of left ideals of R is of the form:
Then
some index m such that r(Rem ) = r(Rek ), for all k ≥ m. By the above lemma we
have that Rem = l(r(Rem )) = l(r(Rem+1 )) = Rem+1 = · · · . A similar proof works for
right Artinian.
Definition 18: Let R be any ring. The smallest positive integer n such that n1 = 0
is called the characteristic of R, Char(R); if no such n exists, Char(R) = 0.
If R is the ring of all C ∞ real valued functions on the real line, then the classical
derivative, d/dx, is a derivation on R.
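That d/dx is a derivation is just the product rule, δ(f g) = δ(f )g + f δ(g) (here σ = id). A symbolic check of this, using sympy, is our own illustration:

```python
import sympy as sp

x = sp.symbols('x')
f = sp.sin(x)
g = sp.exp(x**2)

# the derivation law δ(fg) = δ(f)g + fδ(g) is exactly the product rule
lhs = sp.diff(f * g, x)
rhs = sp.diff(f, x) * g + f * sp.diff(g, x)
assert sp.simplify(lhs - rhs) == 0
```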
Definition 22: An ideal I ⊂ R is called a δ-ideal if δ(I) ⊂ I.
Definition 23: The ring R is called δ-simple if the only δ-ideals of R are (0) and
R.
Theorem 24: Let K be a ring with Char(K) = 0 and let δ be a derivation. Then
Proof: First we assume that δ is inner. Then, for some c ∈ K, δ(b) = cb − bc, for all
Now assume U ≠ (0) is a δ-ideal of K. Let I = {Σ ai xi ∈ R : ai ∈ U } = U R. This
is clearly an ideal of R.
Conversely, assume K is δ-simple but that R is not simple. Let I ≠ (0) be an ideal
Let n be the minimum degree for the non-zero polynomials in I and let
f x = axn+1 + an−1 xn + · · ·
and
so that
g = xn + dxn−1 + · · · ∈ I ⇒ 1 ∈ U .
x2 b = x(xb)
= x(bx + δ(b))
= xbx + xδ(b)
= bx2 + 2δ(b)x + δ 2 (b)
and, inductively,
xn b = bxn + nδ(b)xn−1 + · · ·
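The inductive expansion xn b = bxn + nδ(b)xn−1 + · · · is the Leibniz formula xn b = Σk C(n, k) δ k (b) xn−k; only the two leading terms are used in the proof. Realizing δ as d/dt on functions, so that x acts as differentiation, the full formula can be checked symbolically. This verification (with sympy, and with δ = d/dt as our concrete instance of a derivation) is our own:

```python
import sympy as sp

t = sp.symbols('t')
b = sp.Function('b')(t)
h = sp.Function('h')(t)
n = 3

# x^n b applied to a test function h, with x acting as d/dt:
lhs = sp.diff(b * h, t, n)
# the claimed expansion b·x^n + n·δ(b)·x^{n-1} + ... (full Leibniz sum)
rhs = sum(sp.binomial(n, k) * sp.diff(b, t, k) * sp.diff(h, t, n - k)
          for k in range(n + 1))
assert sp.simplify(lhs - rhs) == 0
```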
bg = bxn + bdxn−1 + · · ·
and
gb = xn b + dxn−1 b + · · ·
so that
bg − gb = (bd − db − nδ(b))xn−1 + · · ·
since bg − gb ∈ I.
bg − gb = (bd − db − nδ(b))xn−1 + · · · = 0
⇒ bd − db = nδ(b) ⇒
δ(b) = b(d/n) − (d/n)b
Thus, δ is inner.
Now let R = K < xi > and let F = {fj } ⊂ R, with (F ) the ideal generated by F in R.
Thus we may talk about R̄ = R/(F ), the ring generated over K by {xi } with
relations F .
The n-th Weyl algebra An (K) is generated by {xi , yi : 1 ≤ i ≤ n}, each commuting with
(1) xi yi − yi xi = 1; 1 ≤ i ≤ n
(2) xi yj − yj xi = 0; i ≠ j
(3) xi xj − xj xi = 0; i ≠ j
(4) yi yj − yj yi = 0; i ≠ j
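The relation xy − yx = 1 is modeled by differentiation and multiplication: on functions of t, let x act as d/dt and y as multiplication by t. A symbolic check of relation (1) in this realization (our own illustration):

```python
import sympy as sp

t = sp.symbols('t')
f = sp.Function('f')(t)

# realize A_1: x acts as d/dt, y as multiplication by t
X = lambda h: sp.diff(h, t)
Y = lambda h: t * h

# relation (1): xy - yx = 1, i.e. the commutator acts as the identity
assert sp.simplify(X(Y(f)) - Y(X(f)) - f) == 0
```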
(1) A1 (K) can be identified with K[y][x; δ], and
Proof: Let R be a simple ring with center C. Now suppose that C is not a field.
Then for some non-zero element a ∈ C there does not exist a non-zero element b
such that ab = 1. This gives us that the ideal generated by a, (a), does not contain
the identity. Thus R contains a proper non-zero ideal and is therefore not
simple, a contradiction.
Lemma 27: Let K be a simple ring of characteristic 0. Then for any non-inner
Proof: That R is simple follows from theorem 19. Since the center of K is a field
Rx ⊋ Rx2 ⊋ · · ·
Theorem 28: Let K be any simple ring of characteristic zero. Then An (K), n ≥ 1,
is a simple non-Artinian ring.
the lemma that A1 (K) is non-Artinian. To show that A1 (K) is simple we use the
polynomials in U , then
U = K[y].