The Basic Facts About Rings


WEEK 01

Introduction
The study of polynomial equations has been perhaps the most important problem in the evolution of algebra, and it is also the focus of this course. Can we find procedures for solving, or at least gaining some information about the solutions of, systems of simultaneous equations involving addition and multiplication of several variables? Consider, for example, the system

2x^2 − 4xyz − 3z^5y^2 = 17,
x^3 + y^3 + z^3 = 100.

What can be said about triples (x, y, z) of numbers satisfying these equations? Such problems are very difficult, and these lectures are not going to tell you how to solve them! They will, however, introduce you to the algebraic concepts that have been developed to help analyse this kind of problem.

The concepts you will meet in this course are the end result of centuries of mathematical refinement. We do not have enough time to review the processes that shaped the theory; instead the theory will be presented in its current refined form. This provides the shortest route to our goal, although it does have the unfortunate consequence that the reasons for the various definitions are initially not always clear. More time is spent on the abstract concepts than on the problems they were invented to solve. But this is the most efficient way to gain an understanding of the subject. The concepts may seem strange or difficult at first, but they become familiar and easy with time, and you see, retrospectively, why they are useful.

These lecture summaries will make frequent reference to the notes Rings, Fields and an Introduction to Galois Theory (which you are advised to either buy or download), hereinafter referred to as [RFGT].

The basic facts about rings


See Section 4 of [RFGT], starting on p. 8, for the first definitions and elementary propositions of ring theory. Note that the two halves of Axiom (vi), the two distributive laws a(b + c) = ab + ac and (b + c)a = ba + ca for all a, b and c in a ring, are not consequences of each other. We shall later meet an example of a so-called near-ring, an algebraic system satisfying all of the ring axioms except for one of the distributive laws. Exercises 1 and 2 on p. 9 of [RFGT] were done in the lectures.

Proposition 1. Let R be any ring and a ∈ R. The equation a + z = a has a unique solution in R; the solution is z = 0, the zero element of R.

Proof. That z = 0 is a solution follows from Axiom (iii) (in Definition (4.1) of [RFGT]). Conversely, suppose that z ∈ R satisfies a + z = a. By Axiom (iv) there is a b ∈ R satisfying b + a = 0, and so, using Axioms (iii), (ii) and (i),

z = z + 0 = 0 + z = (b + a) + z = b + (a + z) = b + a = 0,

as required.
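Proposition 1 can be illustrated by brute force in a small finite ring. The sketch below is only a sanity check of my own, not part of [RFGT]: it works in Z/6Z (integers mod 6) and confirms that for each a the only z with a + z = a is z = 0.

```python
# Brute-force illustration of Proposition 1 in the finite ring Z/6Z.
# For every element a we collect all z satisfying a + z = a and check
# that the zero element is the unique solution.

n = 6
for a in range(n):
    solutions = [z for z in range(n) if (a + z) % n == a]
    assert solutions == [0], (a, solutions)

print("in Z/6Z the equation a + z = a has the unique solution z = 0")
```

Of course this checks only one ring; the point of the proposition is that the argument via Axioms (i)-(iv) works in every ring.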

For a good overview of modern algebra, see the Encyclopedia Britannica articles "Algebra, elementary and multivariate" and "Algebraic structures".

It is a trivial consequence of this proposition that the zero element of R is unique. For, if z is another zero element, meaning that a + z = a for all a ∈ R, then, in particular, 0 + z = 0. By Proposition 1, with a replaced by 0, it follows that z = 0. Note, however, that uniqueness of the zero can be proved even more easily than this: see the comment following (iii) of Definition (4.1) of [RFGT]. See also the equally easy proof, in the comment following (iv), that the negative of a given ring element is unique.

Proposition 2. Let R be any ring and a ∈ R. Then a0 = 0a = 0.

Proof. Since 0 + 0 = 0 (Axiom (iii)) we have that (0 + 0)a = 0a. But (0 + 0)a = 0a + 0a by one of the distributive laws, and so 0a + 0a = 0a. Now Proposition 1, applied with 0a in place of a and 0a in place of z, tells us that 0a = 0. A similar proof gives a0 = 0.

As explained on p. 9 of [RFGT], if R is a ring with an identity element and it happens that the identity element coincides with the zero element, then R has no other elements at all (since for all a ∈ R we would have a = 1a = 0a = 0). We shall almost always want to exclude this trivial case; so be warned that, in future, whenever I say "Let R be a ring with 1", I probably mean "Let R be a ring with 1 such that 1 ≠ 0". I believe that I have already fallen into this trap twice in lectures, omitting the important requirement that 1 ≠ 0 from the definitions I gave for the terms integral domain and field. The trivial one-element ring {0} is not considered to be either an integral domain or a field. It is easily proved that fields have no zero divisors; hence every field is also an integral domain. (The proof is given in [RFGT].)

Some obvious examples

The following notation will be followed throughout the course: C is the set of all complex numbers, R is the set of all real numbers, Z is the set of all integers. We also define Q = { p/q | p, q ∈ Z and q ≠ 0 }, the set of all rational numbers.
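The remark that fields have no zero divisors, while general rings may have them, can be seen concretely in the rings Z/nZ. The following sketch is an illustration added here, not taken from [RFGT]: it lists the zero divisors of Z/6Z and confirms that Z/7Z, a field since 7 is prime, has none.

```python
# Zero divisors in Z/nZ: nonzero a for which some nonzero b has ab = 0.
# Z/6Z has zero divisors (for instance 2 * 3 = 0 mod 6), whereas Z/7Z
# is a field and therefore an integral domain.

def zero_divisors(n):
    """Sorted list of the zero divisors of Z/nZ."""
    return sorted({a for a in range(1, n)
                     for b in range(1, n) if (a * b) % n == 0})

print(zero_divisors(6))  # [2, 3, 4]
print(zero_divisors(7))  # []
```

More generally this search returns an empty list exactly when n is prime, in line with the fact that Z/pZ is a field for p prime.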
It is assumed that you are familiar with the basic properties of the standard operations of addition and multiplication of numbers. In particular, it is clear that C, R and Q are fields, and Z is an integral domain.

If addition and multiplication of matrices are defined in the usual way then the set of all n × n matrices over a ring R becomes a ring (for any fixed positive integer n). We shall denote this ring by Mn(R). The proofs that the standard properties of matrix algebra, the associative and distributive laws and the like, hold for matrices over any ring are just the same as the proofs for matrices over R, and so we omit them. Note that R is not required to be commutative. If R has a 1 then Mn(R) also has a 1: in Mn(R) the 1 is the n × n identity matrix over R.

Introduction to subrings

Let R be any ring and S a subset of R. We say that S is closed under addition if the following condition holds for all a, b ∈ R: if a, b ∈ S then a + b ∈ S. We say

Indeed, I think that you should allow n = 0, in which case there should be exactly one n × n matrix over R. But this is just a matter of convention, and some people may not agree with me!

that S is closed under multiplication if the following condition holds for all a, b ∈ R: if a, b ∈ S then ab ∈ S. And we say that S is closed with respect to negatives if the following condition holds for all a ∈ R: if a ∈ S then −a ∈ S. We shall prove below that a nonempty subset S that satisfies these three closure properties is a ring with respect to the addition and multiplication operations that it inherits from R. In these circumstances S is called a subring of R. For example, it can be seen that if n ∈ Z then

nZ = { nk | k ∈ Z }

(the set of all integers that are divisible by n) is a ring with respect to the usual addition and multiplication operations. Similarly, the set of all upper triangular n × n matrices, over any ring R, is a ring. To prove this you just have to show that the sum and product of two upper triangular matrices is always upper triangular, and the negative of an upper triangular matrix is upper triangular. (The proof in the particular case that n = 3 and R = ℝ is done on p. 15 of [RFGT]. The general case is no harder.)

It is clear that a subring of a commutative ring is commutative, and reasonably clear also that a subring of a ring with no zero divisors has no zero divisors. (For this last point you need the fact that the zero element of any subring of a ring R coincides with the zero of R itself. We shall prove this in the course of proving the subring theorem.)

In the lecture I made the mistake of asserting that any subring of a field is an integral domain. If I had thought a little longer I would have made the mistake of asserting that any subring of an integral domain is an integral domain! Of course, since the definition requires that an integral domain has to have an identity element, the correct statement is this: any subring of an integral domain is an integral domain, provided that it has an identity element. In particular, since fields are integral domains, any subring of a field is an integral domain, provided that it has an identity element.

In view of the above discussion, it will be just as well to prove the following fact.

Proposition. Let R be an integral domain and let S be a subring of R that is also an integral domain. Then the identity elements of S and R coincide.

Proof. Let 1_R and 1_S be the identity elements of R and S respectively. Then a1_S = a for all a ∈ S, and a1_R = a for all a ∈ R. We can put a = 1_S in both these equations, since 1_S ∈ S and 1_S ∈ R, and conclude that 1_S 1_S = 1_S 1_R. Now
1_S(1_S − 1_R) = 1_S 1_S − 1_S 1_R = 0,

and since there are no zero divisors it follows that either 1_S = 0 or 1_S − 1_R = 0. The former alternative is not possible since the 1 element of an integral domain must be nonzero; so 1_S − 1_R = 0, whence 1_S = 1_R, as required.

Having made such a song and dance about this I feel obliged to point out that some people prefer to replace the requirement that an integral domain must have an identity element by the weaker requirement that an integral domain must have at least one nonzero element. With this definition, which is certainly not the definition we use in this course, it would be the case that every subring of an integral domain, apart from the trivial subring {0}, is also an integral domain.
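The subring 2Z of Z is the standard example behind this discussion: it is closed under addition, multiplication and negatives, yet has no identity element, since an identity e would satisfy e · 2 = 2, forcing e = 1, which is odd. A small numerical sanity check of this (my own illustration, not from [RFGT]) on a finite window of even integers:

```python
# 2Z sampled on a symmetric finite window of even integers.
evens = [2 * k for k in range(-50, 51)]

# sums, products and negatives of even integers are even
assert all((a + b) % 2 == 0 for a in evens for b in evens)
assert all((a * b) % 2 == 0 for a in evens for b in evens)
assert all(-a in evens for a in evens)

# no even number acts as a multiplicative identity on the element 2,
# since e * 2 == 2 would force e == 1, which is not even
assert not any(e * 2 == 2 for e in evens)

print("2Z is closed under +, * and negatives, but has no identity")
```

So under our definition 2Z, although a subring of the integral domain Z, is not itself an integral domain.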

The reason behind this divergence of preferences will become apparent when we discuss the construction of the field of quotients of an integral domain. It is a fact that given any integral domain one can construct a field (called the field of quotients) in a manner analogous to the way in which the field of rational numbers is constructed from the integral domain Z. But the construction works equally well if one uses only the weaker definition of integral domain: a field of quotients can be constructed from any nontrivial commutative ring with no zero divisors. (For example, Q is not only the field of quotients of Z, it is also the field of quotients of 2Z, since every rational number can be expressed in the form p/q with p and q being even integers.)

The Subring Theorem. Let S be a subset of a ring R. Suppose that S is nonempty, closed under addition, closed under multiplication, and closed with respect to negatives. Then there exist addition and multiplication operations on S that make S into a ring and are consistent with the addition and multiplication operations of R, in the sense that for all a, b ∈ S the sum a + b and the product ab take the same values whether the operations used are those of S or those of R.

Proof. A binary operation on R is simply a function R × R → R. Thus R's addition operation is a rule that assigns an element a + b ∈ R to each pair of elements a, b ∈ R. Closure of S under addition tells us that a + b ∈ S if a, b ∈ S, and hence yields a rule for obtaining an element of S from any given pair of elements of S. In the language of functions, the restriction to S × S of the addition function R × R → R determines a function S × S → S. We define this to be the addition operation for S. The multiplication operation for S is defined analogously.

Since S ≠ ∅ we can choose an element t ∈ S. By closure with respect to negatives we deduce that −t ∈ S. By closure under addition (−t) + t ∈ S. Hence 0, the zero element of R, is in S.
Now if a, b and c are arbitrary elements of S, then a, b and c are also in R (since S ⊆ R), and so by the fact that R is a ring the following equations all hold:

(a + b) + c = a + (b + c)
a + b = b + a
a + 0 = a
(ab)c = a(bc)
a(b + c) = ab + ac
(a + b)c = ac + bc.


Since these hold for all a, b, c ∈ S, it follows that S satisfies Axioms (i), (ii), (iii), (v) and (vi) of Definition (4.1). Axiom (iv) is also satisfied since for all a ∈ S the element b = −a is in S (by closure with respect to negatives) and satisfies a + b = 0 (by Axiom (iv) for R). Hence S is a ring under the operations we have defined.

Direct products

If R1 and R2 are rings then the set R1 × R2 = { (a1, a2) | a1 ∈ R1, a2 ∈ R2 } becomes a ring if we define

(a1, a2) + (b1, b2) = (a1 + b1, a2 + b2)
(a1, a2)(b1, b2) = (a1 b1, a2 b2)

for all a1, b1 ∈ R1 and a2, b2 ∈ R2. There is no short way to prove this; we just have to check all the axioms. In each case the proof that the axiom in question is satisfied is a trivial consequence of the fact that R1 and R2 both satisfy that axiom. As an illustration, let us prove that Axiom (i) holds in R1 × R2. Let α, β and γ be arbitrary elements of R1 × R2. Then there exist a1, b1, c1 ∈ R1 and a2, b2, c2 ∈ R2 such that α = (a1, a2), β = (b1, b2) and γ = (c1, c2), and now

(α + β) + γ = ((a1, a2) + (b1, b2)) + (c1, c2)
            = (a1 + b1, a2 + b2) + (c1, c2)
            = ((a1 + b1) + c1, (a2 + b2) + c2)
            = (a1 + (b1 + c1), a2 + (b2 + c2))
            = (a1, a2) + (b1 + c1, b2 + c2)
            = (a1, a2) + ((b1, b2) + (c1, c2))
            = α + (β + γ).


The crucial step here occurs in the middle, where we use the fact that addition in R1 and addition in R2 are both associative. The other steps are simply applications of the definition of addition in R1 × R2. The proofs that Axioms (ii), (v) and (vi) are satisfied in R1 × R2 are totally analogous to the above proof that Axiom (i) is satisfied. An analogous argument also shows that if 0_1 and 0_2 are the zero elements of R1 and R2 then the element 0 = (0_1, 0_2) ∈ R1 × R2 has the property that α + 0 = α for all α ∈ R1 × R2. Hence Axiom (iii) is satisfied. And if α = (a1, a2) is an arbitrary element of R1 × R2 then the element −α = (−a1, −a2) ∈ R1 × R2 has the property that α + (−α) = 0, so that Axiom (iv) also holds.

Exactly the same construction applies for any number of rings, not just two. Let I be any set, 𝓡 a set of rings and R: I → 𝓡 an arbitrary function. Thus for each i ∈ I we are given a ring R(i) in the set 𝓡. The direct product ∏_{i∈I} R(i) is defined to be the set of all families (a_i)_{i∈I}, indexed by the set I, such that a_i ∈ R(i) for each i ∈ I. Here the things that I call families are more or less the same as functions: a family indexed by I is a rule that assigns to each i ∈ I an object in some set S_i determined by i. The indexing set plays the same role as the domain of a function; the difference is that instead of a single set S for the codomain, we may have different target sets for different values of i. In our situation above we can define addition and multiplication of elements of ∏_{i∈I} R(i) pointwise; that is,

(a_i)_{i∈I} + (b_i)_{i∈I} = (a_i + b_i)_{i∈I}
(a_i)_{i∈I} (b_i)_{i∈I} = (a_i b_i)_{i∈I}
for all elements (a_i)_{i∈I} and (b_i)_{i∈I} of ∏_{i∈I} R(i). The proof that these operations make ∏_{i∈I} R(i) into a ring is totally analogous to the proof for R1 × R2. (This construction is also discussed on p. 18 of [RFGT].) In the case that all the rings R(i) are equal to one another, say R(i) = R for all i ∈ I, the direct product ∏_{i∈I} R(i) becomes the set of all functions f: I → R, with addition and multiplication of functions defined by
(f + g)(i) = f(i) + g(i)
(fg)(i) = f(i) g(i)


for all functions f and g from I to R.
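These pointwise operations are easy to realise concretely. The sketch below is my own illustration (the index set I and the particular functions are invented for the example): it takes I to be a three-element set and R = Z, represents elements of the function ring as Python functions, and forms their pointwise sum and product.

```python
# The ring of functions I -> Z with pointwise operations, for a small
# index set I. This is the direct product of |I| copies of Z.

I = ("p", "q", "r")

def f(i):
    return {"p": 1, "q": 2, "r": 3}[i]

def g(i):
    return {"p": 4, "q": 5, "r": 6}[i]

def pointwise_sum(f, g):
    # (f + g)(i) = f(i) + g(i)
    return lambda i: f(i) + g(i)

def pointwise_product(f, g):
    # (fg)(i) = f(i) g(i)
    return lambda i: f(i) * g(i)

s = pointwise_sum(f, g)
p = pointwise_product(f, g)
print([s(i) for i in I])  # [5, 7, 9]
print([p(i) for i in I])  # [4, 10, 18]
```

The zero of this ring is the constant function i ↦ 0, and the negative of f is i ↦ −f(i), mirroring the axiom checks for R1 × R2 above.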
Note that in linear algebra, and in many other parts of algebra, pointwise multiplication of functions (the kind of multiplication defined above) is not the most common kind of multiplication. More common is composition, where the product of f and g is defined by the rule (f∘g)(x) = f(g(x)). Fortunately, it is somewhat rare to encounter a situation in which both these kinds of multiplication make sense. Pointwise multiplication makes sense when the domains of f and g are equal, the codomains of f and g are equal, and there is a multiplication operation defined on this codomain. Composition of f and g makes sense whenever the codomain of g is contained in the domain of f. In particular, if S is any set then the composite of functions f and g from S to S gives a function f∘g from S to S. If the set S is equipped with an addition operation but not a multiplication operation (as may happen, for example, if S is a vector space) then we can define addition of functions S → S pointwise (as above) and define multiplication of functions to be composition. Question: Do these operations make the set of functions S → S into a ring? We shall return to this question next week.

Polynomials

Let R be any ring and let P be the set of all infinite sequences of elements of R, indexed by the nonnegative integers: P = { (r_0, r_1, r_2, . . .) | r_i ∈ R for all integers i ≥ 0 }. This set becomes a ring if we define addition of sequences pointwise and define multiplication of sequences by the rule
(r_0, r_1, r_2, . . .)(s_0, s_1, s_2, . . .) = (r_0 s_0, r_0 s_1 + r_1 s_0, r_0 s_2 + r_1 s_1 + r_2 s_0, . . .);


here the i-th term in the product is defined to be ∑_{j=0}^{i} r_j s_{i−j}, for all i ≥ 0. It is a routine task to verify that the axioms are indeed satisfied. Students are very strongly urged, in particular, to perform the routine task of checking that the associative law for multiplication is satisfied, since to do this you need to be able to correctly interchange the order of summation in the double sum that arises (and this is an important technical skill).

This definition of multiplication becomes less mysterious if you observe that the set P is in bijective correspondence with the set of all formal power series ∑_{i=0}^{∞} r_i x^i (where here x is just some meaningless symbol). The rule for multiplication of formal power series ∑ r_i x^i and ∑ s_i x^i is to multiply each term of the first series by each term of the second, assuming that the symbol x commutes with all the coefficients s_i and that x^i x^j = x^{i+j}, and then collect like terms. It is easily verified that the coefficient of x^i in the product is ∑_{j=0}^{i} r_j s_{i−j}. The symbol x is technically known as an indeterminate, and the ring of formal power series over R in the indeterminate x is commonly denoted by R[[x]].

A formal power series ∑_{i=0}^{∞} r_i x^i in which all but a finite number of the coefficients r_i are zero is called a polynomial in the indeterminate x. It is not hard to check that the set of all polynomials in x is a subring of R[[x]]. We use the notation R[x] for the ring of polynomials over R in the indeterminate x.
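The multiplication rule above is just convolution of the coefficient sequences, and for polynomials (sequences with only finitely many nonzero terms) it can be computed directly. The following sketch, my own illustration rather than anything from [RFGT], multiplies coefficient lists over Z, written lowest degree first.

```python
# Polynomial multiplication via the convolution formula: the i-th
# coefficient of the product is sum over j from 0 to i of r_j * s_{i-j}.

def convolve(r, s):
    """Coefficients of the product of the polynomials r and s."""
    out = [0] * (len(r) + len(s) - 1)
    for j, rj in enumerate(r):
        for k, sk in enumerate(s):
            out[j + k] += rj * sk
    return out

# (1 + x)(1 + x) = 1 + 2x + x^2
print(convolve([1, 1], [1, 1]))      # [1, 2, 1]
# (1 - x)(1 + x + x^2) = 1 - x^3
print(convolve([1, -1], [1, 1, 1]))  # [1, 0, 0, -1]
```

Checking associativity of convolve amounts to the interchange of summation mentioned above, which is a good way to make that exercise concrete.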
