Lattices and Boolean Algebras


LATTICES AND BOOLEAN ALGEBRAS

E-content for M.A./M.Sc. and Integrated B.Sc.-M.Sc.

Prepared by
Dr Aftab Hussain Shah
Assistant Professor
Department of Mathematics
Central University of Kashmir

MARCH 1, 2018
CENTRAL UNIVERSITY OF KASHMIR
Srinagar
Contents

Chapter-1: The Concept of an Order


Chapter-2: Introduction to Lattices
Chapter-3: Modular, Distributive and Complemented Lattices
Chapter-4: Boolean Algebras and their applications
Chapter 1
THE CONCEPT OF AN ORDER

The present chapter aims to provide introductory definitions, examples and results on
ordered sets which shall be used in the subsequent chapters. This chapter consists of five
sections. Section 1.1 is on ordered sets and their examples. In Section 1.2 we discuss Hasse
diagrams, a special type of diagram used to represent ordered sets. Sections 1.3 and 1.4 deal
with results and examples on order-preserving and residuated mappings between ordered
sets. The chapter ends with some important results and examples on isomorphisms of ordered
sets.

1.1 Concept of an order

In this section we go through the definition of a partial order relation on a set, called an
"order" for short, and discuss partially ordered sets (ordered sets) in detail with the help of
various examples. The section ends with some important results on ordered sets.

Definition 1.1.1: A binary relation on a non-empty set 𝐸 is a subset 𝑅 of the Cartesian product
set 𝐸 × 𝐸 = {(𝑥, 𝑦) | 𝑥, 𝑦 ∈ 𝐸}. For any (𝑥, 𝑦) ∈ 𝐸 × 𝐸, (𝑥, 𝑦) ∈ 𝑅 means that 𝑥 is related to 𝑦
under 𝑅, and we denote this by 𝑥𝑅𝑦.

Definition 1.1.2: An equivalence relation on 𝐸 is a binary relation 𝑅 that is:

(a) Reflexive [for all 𝑥 ∈ 𝐸, (𝑥, 𝑥) ∈ 𝑅];
(b) Symmetric [for all 𝑥, 𝑦 ∈ 𝐸, if (𝑥, 𝑦) ∈ 𝑅 then (𝑦, 𝑥) ∈ 𝑅];
(c) Transitive [for all 𝑥, 𝑦, 𝑧 ∈ 𝐸, if (𝑥, 𝑦) ∈ 𝑅 and (𝑦, 𝑧) ∈ 𝑅 then (𝑥, 𝑧) ∈ 𝑅].
For any relation 𝑅 on 𝐸, the dual of 𝑅, denoted by 𝑅ᵈ, is defined by:
(𝑥, 𝑦) ∈ 𝑅ᵈ if and only if (𝑦, 𝑥) ∈ 𝑅.
One can easily see that if 𝑅 is symmetric then 𝑅 = 𝑅ᵈ.
Here we shall be particularly interested in the situation where property (b) is replaced by the
following property:
(d) Antisymmetry [for all 𝑥, 𝑦 ∈ 𝐸, if (𝑥, 𝑦) ∈ 𝑅 and (𝑦, 𝑥) ∈ 𝑅 then 𝑥 = 𝑦].
One can easily verify that if 𝑅 is reflexive and antisymmetric then 𝑅 ∩ 𝑅ᵈ = 𝑖𝑑𝐸, where 𝑖𝑑𝐸
denotes the equality relation on 𝐸. Indeed:
(𝑥, 𝑦) ∈ 𝑅 ∩ 𝑅ᵈ if and only if (𝑥, 𝑦) ∈ 𝑅 and (𝑥, 𝑦) ∈ 𝑅ᵈ;
if and only if (𝑥, 𝑦) ∈ 𝑅 and (𝑦, 𝑥) ∈ 𝑅;
if and only if 𝑥 = 𝑦 (the forward implication by antisymmetry, the converse by reflexivity);
if and only if (𝑥, 𝑦) ∈ 𝑖𝑑𝐸.
Hence 𝑅 ∩ 𝑅ᵈ = 𝑖𝑑𝐸.
Notation: We usually denote 𝑅 by the symbol ≤ and write the expression (𝑥, 𝑦) ∈ ≤ in the
equivalent form 𝑥 ≤ 𝑦, which we read as "𝑥 is less than or equal to 𝑦". With this notation we
now define an order on a set 𝐸.

Definition 1.1.3: We say ≤ is an order on 𝐸 if and only if:


(a) Reflexivity: [For all 𝑥 ∈ 𝐸, 𝑥 ≤ 𝑥.]
(b) Antisymmetry: [For all 𝑥, 𝑦 ∈ 𝐸, if 𝑥 ≤ 𝑦 and 𝑦 ≤ 𝑥 then 𝑥 = 𝑦.]
(c) Transitivity: [For all 𝑥, 𝑦, 𝑧 ∈ 𝐸, if 𝑥 ≤ 𝑦 and 𝑦 ≤ 𝑧 then 𝑥 ≤ 𝑧.]

Definition 1.1.4: By an ordered set we shall mean a set 𝐸 on which there is defined an order ≤
and we denote it by (𝐸; ≤).
Other common terminology for an order is a partial order and for an ordered set is a partially
ordered set or a poset.

Example 1.1.5: On every set 𝐸 the relation of equality is an order.

Solution: Let 𝐸 be an arbitrary set and define, for all 𝑥, 𝑦 ∈ 𝐸, 𝑥 ≤ 𝑦 if and only if 𝑥 = 𝑦.
Reflexivity: For all 𝑥 ∈ 𝐸 we have 𝑥 = 𝑥, so 𝑥 ≤ 𝑥 for all 𝑥 ∈ 𝐸.
Antisymmetry: For any 𝑥, 𝑦 ∈ 𝐸, if 𝑥 ≤ 𝑦 and 𝑦 ≤ 𝑥 then in particular 𝑥 = 𝑦 (by definition
of ≤). Thus ≤ is antisymmetric.
Transitivity: For any 𝑥, 𝑦, 𝑧 ∈ 𝐸, let 𝑥 ≤ 𝑦 and 𝑦 ≤ 𝑧. Then 𝑥 = 𝑦 and 𝑦 = 𝑧, so 𝑥 = 𝑧, that
is, 𝑥 ≤ 𝑧. This proves that ≤ is transitive.

Example 1.1.6: On the set ℙ(𝐸) of all subsets of a non-empty set 𝐸, the relation ⊆ of set
inclusion defined by 𝐴 ≤ 𝐵 if and only if 𝐴 ⊆ B is an order.

Solution: Reflexivity: For any 𝐴 ∈ ℙ(𝐸), since 𝐴 ⊆ 𝐴 we have 𝐴 ≤ 𝐴. Therefore ⊆ is reflexive.
Antisymmetry: For any 𝐴, 𝐵 ∈ ℙ(𝐸), 𝐴 ≤ 𝐵 and 𝐵 ≤ 𝐴 imply 𝐴 ⊆ 𝐵 and 𝐵 ⊆ 𝐴; this
gives 𝐴 = 𝐵. Therefore ⊆ is antisymmetric.
Transitivity: For any 𝐴, 𝐵, 𝐶 ∈ ℙ(𝐸), let 𝐴 ≤ 𝐵 and 𝐵 ≤ 𝐶; this means 𝐴 ⊆ 𝐵 and 𝐵 ⊆ 𝐶,
which gives 𝐴 ⊆ 𝐶; therefore 𝐴 ≤ 𝐶 and thus ⊆ is transitive. Hence ⊆ defines an order on ℙ(𝐸).

Example 1.1.7: On the set ℕ of natural numbers the relation | of divisibility, defined by 𝑚 ≤
𝑛 if and only if 𝑚|𝑛, is an order.

Solution: To show that | is an order on ℕ we must verify reflexivity, antisymmetry and
transitivity.
Reflexivity: For any 𝑚 ∈ ℕ we have 𝑚 = 1 ∙ 𝑚, so 𝑚|𝑚, thus 𝑚 ≤ 𝑚 and hence ≤ is
reflexive.
Antisymmetry: For any 𝑚, 𝑛 ∈ ℕ, suppose 𝑚 ≤ 𝑛 and 𝑛 ≤ 𝑚, that is, 𝑚|𝑛 and 𝑛|𝑚. Since
𝑚 and 𝑛 are positive this forces 𝑚 = 𝑛. Thus ≤ is antisymmetric.
Transitivity: For 𝑚, 𝑛, 𝑝 ∈ ℕ, suppose 𝑚 ≤ 𝑛 and 𝑛 ≤ 𝑝, that is, 𝑚|𝑛 and 𝑛|𝑝. Then 𝑚|𝑝,
i.e., 𝑚 ≤ 𝑝. Thus ≤ is transitive. So | is an order on ℕ.
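The three verifications above all follow the same pattern, and on a finite set the axioms can also be checked by brute force. A small Python sketch (the function name is ours, not part of the text):

```python
from itertools import product

def is_partial_order(E, leq):
    """Brute-force check of reflexivity, antisymmetry and transitivity on a finite set E."""
    E = list(E)
    reflexive = all(leq(x, x) for x in E)
    antisymmetric = all(x == y for x, y in product(E, E) if leq(x, y) and leq(y, x))
    transitive = all(leq(x, z) for x, y, z in product(E, E, E) if leq(x, y) and leq(y, z))
    return reflexive and antisymmetric and transitive

E = range(1, 13)
print(is_partial_order(E, lambda m, n: m == n))      # equality (Example 1.1.5): True
print(is_partial_order(E, lambda m, n: n % m == 0))  # divisibility (Example 1.1.7): True
print(is_partial_order(E, lambda m, n: m <= n))      # the usual total order: True
```

Passing the strict relation 𝑚 < 𝑛 instead would return False, since it fails reflexivity.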

Example 1.1.8: If (𝑃; ≤) is an ordered set and 𝑄 is a subset of 𝑃, then the relation ≤𝑄 defined
on 𝑄 by
𝑥 ≤𝑄 𝑦 if and only if 𝑥 ≤ 𝑦
is an order on 𝑄.

Solution: We need to show that the relation ≤𝑄 defined on 𝑄 is an order.


Reflexivity: For any 𝑥 ∈ 𝑄 we have 𝑥 ∈ 𝑃 (since 𝑄 ⊆ 𝑃); as ≤ is an order on 𝑃 we have
𝑥 ≤ 𝑥, and hence 𝑥 ≤𝑄 𝑥. Thus ≤𝑄 is reflexive on 𝑄.
Antisymmetry: Let 𝑥, 𝑦 ∈ 𝑄 be such that 𝑥 ≤𝑄 𝑦 and 𝑦 ≤𝑄 𝑥. Then 𝑥 ≤ 𝑦 and 𝑦 ≤ 𝑥, so
𝑥 = 𝑦 (since ≤ is an order on 𝑃). This proves that ≤𝑄 is antisymmetric.
Transitivity: Suppose for any 𝑥, 𝑦, 𝑧 ∈ 𝑄 that 𝑥 ≤𝑄 𝑦 and 𝑦 ≤𝑄 𝑧. Then 𝑥 ≤ 𝑦 and 𝑦 ≤ 𝑧,
so 𝑥 ≤ 𝑧 (since ≤ is an order on 𝑃), and hence 𝑥 ≤𝑄 𝑧. This proves that ≤𝑄 is transitive.
We often write ≤𝑄 simply as ≤ and say 𝑄 inherits the order ≤ from 𝑃.
Thus for example, the set 𝐸𝑞𝑢 𝐸 of all equivalence relations on 𝐸 inherits the order ⊆ from
𝑃(𝐸 × 𝐸).
Example 1.1.9: The set of even positive integers may be ordered in the usual way or by
divisibility.

Example 1.1.10: If (𝐸1; ≤1), (𝐸2; ≤2), . . . , (𝐸𝑛; ≤𝑛) are ordered sets then the Cartesian
product set 𝐸1 × 𝐸2 × ⋯ × 𝐸𝑛 can be given the Cartesian order ≤ defined by: for any
(𝑥1, 𝑥2, . . . , 𝑥𝑛), (𝑦1, 𝑦2, . . . , 𝑦𝑛) ∈ 𝐸1 × 𝐸2 × ⋯ × 𝐸𝑛,
(𝑥1, 𝑥2, . . . , 𝑥𝑛) ≤ (𝑦1, 𝑦2, . . . , 𝑦𝑛) if and only if 𝑥𝑖 ≤𝑖 𝑦𝑖 for all 𝑖 = 1, 2, … , 𝑛.

Solution: Reflexivity: Let (𝑥1, 𝑥2, . . . , 𝑥𝑛) ∈ 𝐸1 × 𝐸2 × ⋯ × 𝐸𝑛, where 𝑥𝑖 ∈ 𝐸𝑖 for all
𝑖 = 1, 2, … , 𝑛. Since each (𝐸𝑖; ≤𝑖) is an ordered set, by reflexivity of each ≤𝑖 we have
𝑥𝑖 ≤𝑖 𝑥𝑖 for all 𝑖, and hence (𝑥1, 𝑥2, . . . , 𝑥𝑛) ≤ (𝑥1, 𝑥2, . . . , 𝑥𝑛) (by definition of ≤). This
shows that ≤ is reflexive.
Antisymmetry: Let (𝑥1, . . . , 𝑥𝑛), (𝑦1, . . . , 𝑦𝑛) ∈ 𝐸1 × ⋯ × 𝐸𝑛 and suppose
(𝑥1, . . . , 𝑥𝑛) ≤ (𝑦1, . . . , 𝑦𝑛) and (𝑦1, . . . , 𝑦𝑛) ≤ (𝑥1, . . . , 𝑥𝑛). Then 𝑥𝑖 ≤𝑖 𝑦𝑖 and 𝑦𝑖 ≤𝑖 𝑥𝑖
for all 𝑖, so 𝑥𝑖 = 𝑦𝑖 for all 𝑖 (by antisymmetry of each ≤𝑖), and therefore
(𝑥1, . . . , 𝑥𝑛) = (𝑦1, . . . , 𝑦𝑛). This shows that ≤ is antisymmetric.
Transitivity: For any (𝑥1, . . . , 𝑥𝑛), (𝑦1, . . . , 𝑦𝑛), (𝑧1, . . . , 𝑧𝑛) ∈ 𝐸1 × ⋯ × 𝐸𝑛, if
(𝑥1, . . . , 𝑥𝑛) ≤ (𝑦1, . . . , 𝑦𝑛) and (𝑦1, . . . , 𝑦𝑛) ≤ (𝑧1, . . . , 𝑧𝑛) then 𝑥𝑖 ≤𝑖 𝑦𝑖 and 𝑦𝑖 ≤𝑖 𝑧𝑖
for all 𝑖, so 𝑥𝑖 ≤𝑖 𝑧𝑖 for all 𝑖 (by transitivity of each ≤𝑖), and hence
(𝑥1, . . . , 𝑥𝑛) ≤ (𝑧1, . . . , 𝑧𝑛), proving that ≤ is transitive.

Definition 1.1.11: The order defined above is called the Cartesian order.
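In code, the Cartesian order is a componentwise comparison; a minimal sketch, using the usual ≤ on integers in every coordinate:

```python
def cartesian_leq(xs, ys):
    """Cartesian order on E1 x ... x En: xs <= ys iff x_i <= y_i in every coordinate."""
    return len(xs) == len(ys) and all(x <= y for x, y in zip(xs, ys))

print(cartesian_leq((1, 2, 3), (1, 5, 3)))  # True: every coordinate is <=
print(cartesian_leq((1, 2), (2, 1)))        # False
print(cartesian_leq((2, 1), (1, 2)))        # False: (1,2) and (2,1) are incomparable
```

The last two calls illustrate that the Cartesian order is in general only partial, even when each coordinate order is total.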

Example 1.1.12: Let 𝐸 and 𝐹 be ordered sets, then the set 𝑀𝑎𝑝(𝐸, 𝐹) of all mappings 𝑓: 𝐸 →
𝐹 can be ordered by defining 𝑓 ≤ 𝑔 if and only if 𝑓(𝑥) ≤ 𝑔(𝑥) for all 𝑥 ∈ 𝐸.

Solution: Reflexivity: Let 𝑓 ∈ 𝑀𝑎𝑝(𝐸, 𝐹). For every 𝑥 ∈ 𝐸 we have 𝑓(𝑥) ∈ 𝐹, and since 𝐹
is an ordered set, 𝑓(𝑥) ≤ 𝑓(𝑥); hence 𝑓 ≤ 𝑓, showing that ≤ on 𝑀𝑎𝑝(𝐸, 𝐹) is reflexive.
Antisymmetry: For any 𝑓, 𝑔 ∈ 𝑀𝑎𝑝(𝐸, 𝐹), suppose 𝑓 ≤ 𝑔 and 𝑔 ≤ 𝑓; then 𝑓(𝑥) ≤ 𝑔(𝑥) and
𝑔(𝑥) ≤ 𝑓(𝑥) for all 𝑥 ∈ 𝐸. Since 𝑓(𝑥), 𝑔(𝑥) ∈ 𝐹 and 𝐹 is an ordered set, by antisymmetry
in 𝐹 we have 𝑓(𝑥) = 𝑔(𝑥) for every 𝑥 ∈ 𝐸; thus 𝑓 ≤ 𝑔 and 𝑔 ≤ 𝑓 imply 𝑓 = 𝑔. Hence ≤ is
antisymmetric.
Transitivity: For any 𝑓, 𝑔, ℎ ∈ 𝑀𝑎𝑝(𝐸, 𝐹), suppose 𝑓 ≤ 𝑔 and 𝑔 ≤ ℎ; then 𝑓(𝑥) ≤ 𝑔(𝑥) and
𝑔(𝑥) ≤ ℎ(𝑥) for all 𝑥 ∈ 𝐸. Since 𝑓(𝑥), 𝑔(𝑥), ℎ(𝑥) ∈ 𝐹 and 𝐹 is an ordered set, by
transitivity in 𝐹 we have 𝑓(𝑥) ≤ ℎ(𝑥) for every 𝑥 ∈ 𝐸, and hence 𝑓 ≤ ℎ. Hence ≤ is
transitive.
Thus 𝑀𝑎𝑝(𝐸, 𝐹) forms an ordered set with respect to the order defined above.

Definition 1.1.13: We say that elements 𝑥, 𝑦 of an ordered set (𝐸; ≤) are comparable if either
𝑥 ≤ 𝑦 or 𝑦 ≤ 𝑥. We denote this by writing 𝑥 ∥ 𝑦.

Definition 1.1.14: If all elements of an ordered set ( 𝐸, ≤) are comparable then we say that 𝐸
forms a chain or that ≤ is a total order.

Definition 1.1.15: We say 𝑥, 𝑦 ∈ (𝐸, ≤) are incomparable, and write 𝑥 ∦ 𝑦, if neither
𝑥 ≤ 𝑦 nor 𝑦 ≤ 𝑥. If all pairs of distinct elements of 𝐸 are incomparable, then clearly ≤ is
equality, in which case we say that 𝐸 forms an antichain.

Example 1.1.16: The sets ℕ, ℤ, ℚ, ℝ of natural numbers, integers, rationals and real numbers
form chains under their usual orders of ≤.

Example 1.1.17: In Example 1.1.6, the singleton subsets of 𝑃(𝐸) form an antichain under the
inherited inclusion order.
Example 1.1.18: Let (𝑃1; ≤1) and (𝑃2; ≤2) be ordered sets. We prove that the relation ≤
defined on 𝑃1 × 𝑃2 by
(𝑥1, 𝑦1) ≤ (𝑥2, 𝑦2) if and only if either 𝑥1 <1 𝑥2, or 𝑥1 = 𝑥2 and 𝑦1 ≤2 𝑦2,
is an order, called the lexicographic order on 𝑃1 × 𝑃2. We also show that ≤ is a total order
if and only if ≤1 and ≤2 are total orders.
Solution: Reflexivity: Suppose (𝑥, 𝑦) ∈ 𝑃1 × 𝑃2; then 𝑥 ∈ 𝑃1 and 𝑦 ∈ 𝑃2. Since 𝑃1 and 𝑃2
are ordered sets we have 𝑥 = 𝑥 and 𝑦 ≤2 𝑦, so the second alternative in the definition of ≤
holds, and therefore (𝑥, 𝑦) ≤ (𝑥, 𝑦). This shows that ≤ is reflexive.

Antisymmetry: Let (𝑥1, 𝑦1), (𝑥2, 𝑦2) ∈ 𝑃1 × 𝑃2 and suppose (𝑥1, 𝑦1) ≤ (𝑥2, 𝑦2) and
(𝑥2, 𝑦2) ≤ (𝑥1, 𝑦1). We show that (𝑥1, 𝑦1) = (𝑥2, 𝑦2).
Now (𝑥1, 𝑦1) ≤ (𝑥2, 𝑦2) means that either 𝑥1 <1 𝑥2, or 𝑥1 = 𝑥2 and 𝑦1 ≤2 𝑦2;
and (𝑥2, 𝑦2) ≤ (𝑥1, 𝑦1) means that either 𝑥2 <1 𝑥1, or 𝑥2 = 𝑥1 and 𝑦2 ≤2 𝑦1.

The following cases arise:


Case 1: 𝑥1 <1 𝑥2 and 𝑥2 <1 𝑥1. Then 𝑥1 ≤1 𝑥2 and 𝑥2 ≤1 𝑥1, so by antisymmetry of
(𝑃1; ≤1) we get 𝑥1 = 𝑥2, contradicting 𝑥1 <1 𝑥2. So this case is not possible.
Case 2: 𝑥1 <1 𝑥2, and 𝑥2 = 𝑥1 and 𝑦2 ≤2 𝑦1. The conditions 𝑥1 <1 𝑥2 and 𝑥2 = 𝑥1 cannot
hold simultaneously, so we omit this case also.
Case 3: 𝑥1 = 𝑥2 and 𝑦1 ≤2 𝑦2, and 𝑥2 <1 𝑥1. Again, these cannot hold simultaneously, so
we omit this case.
So, we are only left with the following case:
Case 4: 𝑥1 = 𝑥2 and 𝑦1 ≤2 𝑦2, and 𝑥2 = 𝑥1 and 𝑦2 ≤2 𝑦1. Since (𝑃2; ≤2) is an ordered set,
antisymmetry of ≤2 gives 𝑦1 = 𝑦2; hence 𝑥1 = 𝑥2 and 𝑦1 = 𝑦2. Therefore (𝑥1, 𝑦1) = (𝑥2, 𝑦2).
So, we conclude from above cases that ≤ is antisymmetric.
Transitivity: Suppose (𝑥1 , 𝑦1 ), (𝑥2 , 𝑦2 ), (𝑥3 , 𝑦3 ) ∈ 𝑃1 × 𝑃2 and let (𝑥1 , 𝑦1 ) ≤ (𝑥2 , 𝑦2 ) and
(𝑥2 , 𝑦2 ) ≤ (𝑥3 , 𝑦3 ). We show that (𝑥1 , 𝑦1 ) ≤ (𝑥3 , 𝑦3 ).
Now (𝑥1, 𝑦1) ≤ (𝑥2, 𝑦2) means that either 𝑥1 <1 𝑥2, or 𝑥1 = 𝑥2 and 𝑦1 ≤2 𝑦2;
and (𝑥2, 𝑦2) ≤ (𝑥3, 𝑦3) means that either 𝑥2 <1 𝑥3, or 𝑥2 = 𝑥3 and 𝑦2 ≤2 𝑦3.
The following cases arise:
Case 1: 𝑥1 <1 𝑥2 and 𝑥2 <1 𝑥3. By transitivity in (𝑃1; ≤1) this implies 𝑥1 <1 𝑥3, and
therefore by definition of ≤ we have (𝑥1, 𝑦1) ≤ (𝑥3, 𝑦3).
Case 2: 𝑥1 <1 𝑥2, and 𝑥2 = 𝑥3 and 𝑦2 ≤2 𝑦3. This implies 𝑥1 <1 𝑥3, and hence
(𝑥1, 𝑦1) ≤ (𝑥3, 𝑦3).
Case 3: 𝑥1 = 𝑥2 and 𝑦1 ≤2 𝑦2, and 𝑥2 <1 𝑥3. This implies 𝑥1 <1 𝑥3, and therefore by
definition of ≤ we have (𝑥1, 𝑦1) ≤ (𝑥3, 𝑦3).
Case 4: 𝑥1 = 𝑥2 and 𝑦1 ≤2 𝑦2, and 𝑥2 = 𝑥3 and 𝑦2 ≤2 𝑦3. By transitivity of ≤2 we conclude
that 𝑥1 = 𝑥3 and 𝑦1 ≤2 𝑦3. Therefore (𝑥1, 𝑦1) ≤ (𝑥3, 𝑦3).
So, we conclude from above cases that ≤ is transitive.
Now suppose that (𝑃1 × 𝑃2; ≤) is totally ordered. Let 𝑥1, 𝑥2 ∈ 𝑃1 and fix any 𝑦 ∈ 𝑃2. Then
either (𝑥1, 𝑦) ≤ (𝑥2, 𝑦) or (𝑥2, 𝑦) ≤ (𝑥1, 𝑦). In the first case either 𝑥1 <1 𝑥2 or 𝑥1 = 𝑥2, so
in either event 𝑥1 ≤1 𝑥2; in the second case 𝑥2 ≤1 𝑥1 similarly. Hence ≤1 is a total order.
Likewise, for 𝑦1, 𝑦2 ∈ 𝑃2 fix any 𝑥 ∈ 𝑃1; then either (𝑥, 𝑦1) ≤ (𝑥, 𝑦2) or (𝑥, 𝑦2) ≤ (𝑥, 𝑦1),
and since the first components are equal this gives 𝑦1 ≤2 𝑦2 or 𝑦2 ≤2 𝑦1. Hence ≤2 is a total
order.
Conversely, suppose that ≤1 and ≤2 are total orders, and let (𝑥1, 𝑦1), (𝑥2, 𝑦2) ∈ 𝑃1 × 𝑃2.
If 𝑥1 ≠ 𝑥2 then, since ≤1 is total, either 𝑥1 <1 𝑥2 or 𝑥2 <1 𝑥1, so (𝑥1, 𝑦1) ≤ (𝑥2, 𝑦2) or
(𝑥2, 𝑦2) ≤ (𝑥1, 𝑦1). If 𝑥1 = 𝑥2 then, since ≤2 is total, either 𝑦1 ≤2 𝑦2 or 𝑦2 ≤2 𝑦1, and again
the two pairs are comparable. Thus ≤ is a total order.
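When both coordinate orders are the usual total orders on integers, this is exactly how Python compares tuples, so we can cross-check the definition against the built-in comparison; a small sketch:

```python
def lex_leq(p, q):
    """Lexicographic order: (x1,y1) <= (x2,y2) iff x1 < x2, or x1 == x2 and y1 <= y2."""
    (x1, y1), (x2, y2) = p, q
    return x1 < x2 or (x1 == x2 and y1 <= y2)

# Python's built-in tuple comparison is lexicographic, so on integer pairs
# the hand-written definition and the built-in <= must agree.
pairs = [(x, y) for x in range(4) for y in range(4)]
assert all(lex_leq(p, q) == (p <= q) for p in pairs for q in pairs)
print("lex_leq agrees with built-in tuple comparison on a 4x4 grid")
```

Note that any two pairs are comparable here, as the example proves: both coordinate orders are total, so the lexicographic order is total.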

Example 1.1.19: Let 𝑃1 and 𝑃2 be disjoint sets, let ≤1 be an order on 𝑃1 and let ≤2 be an
order on 𝑃2. We prove that the following defines an order on 𝑃1 ∪ 𝑃2:

𝑥 ≤ 𝑦 if and only if either 𝑥, 𝑦 ∈ 𝑃1 and 𝑥 ≤1 𝑦, or 𝑥, 𝑦 ∈ 𝑃2 and 𝑥 ≤2 𝑦.

Solution: Reflexivity: Take any 𝑥 ∈ 𝑃1 ∪ 𝑃2; then 𝑥 ∈ 𝑃1 or 𝑥 ∈ 𝑃2. If 𝑥 ∈ 𝑃1 then by
reflexivity of ≤1 on 𝑃1 we have 𝑥 ≤1 𝑥, and if 𝑥 ∈ 𝑃2 then by reflexivity of ≤2 on 𝑃2 we
have 𝑥 ≤2 𝑥. In either case, by definition of ≤ we get 𝑥 ≤ 𝑥, proving the reflexivity of ≤.
Antisymmetry: Let 𝑥, 𝑦 ∈ 𝑃1 ∪ 𝑃2 and suppose 𝑥 ≤ 𝑦 and 𝑦 ≤ 𝑥. This means that
either 𝑥, 𝑦 ∈ 𝑃1 and 𝑥 ≤1 𝑦, or 𝑥, 𝑦 ∈ 𝑃2 and 𝑥 ≤2 𝑦;
and
either 𝑦, 𝑥 ∈ 𝑃1 and 𝑦 ≤1 𝑥, or 𝑦, 𝑥 ∈ 𝑃2 and 𝑦 ≤2 𝑥.
The following cases can arise:
Case 1: 𝑥, 𝑦 ∈ 𝑃1 with 𝑥 ≤1 𝑦 and 𝑦 ≤1 𝑥. Since (𝑃1, ≤1) is an ordered set, by
antisymmetry of ≤1 we have 𝑥 = 𝑦.
Case 2: 𝑥, 𝑦 ∈ 𝑃1 with 𝑥 ≤1 𝑦, and 𝑥, 𝑦 ∈ 𝑃2 with 𝑦 ≤2 𝑥. Since 𝑃1 and 𝑃2 are disjoint,
this case is not possible.
Case 3: 𝑥, 𝑦 ∈ 𝑃2 with 𝑥 ≤2 𝑦 and 𝑦 ≤2 𝑥. Since (𝑃2, ≤2) is an ordered set, by
antisymmetry of ≤2 we have 𝑥 = 𝑦.
Case 4: 𝑥, 𝑦 ∈ 𝑃2 with 𝑥 ≤2 𝑦, and 𝑥, 𝑦 ∈ 𝑃1 with 𝑦 ≤1 𝑥, which is again not possible as
𝑃1 and 𝑃2 are disjoint.
Hence, we conclude that ≤ is antisymmetric.
Transitivity: Let 𝑥, 𝑦, 𝑧 ∈ 𝑃1 ∪ 𝑃2 and suppose 𝑥 ≤ 𝑦 and 𝑦 ≤ 𝑧. This means that
either 𝑥, 𝑦 ∈ 𝑃1 and 𝑥 ≤1 𝑦, or 𝑥, 𝑦 ∈ 𝑃2 and 𝑥 ≤2 𝑦;
and
either 𝑦, 𝑧 ∈ 𝑃1 and 𝑦 ≤1 𝑧, or 𝑦, 𝑧 ∈ 𝑃2 and 𝑦 ≤2 𝑧.

we have to show that 𝑥 ≤ 𝑧. The following cases arise:


Case 1: 𝑥, 𝑦 ∈ 𝑃1 with 𝑥 ≤1 𝑦, and 𝑦, 𝑧 ∈ 𝑃1 with 𝑦 ≤1 𝑧. Since (𝑃1; ≤1) is an ordered set,
by transitivity of ≤1 we have 𝑥, 𝑧 ∈ 𝑃1 and 𝑥 ≤1 𝑧, and thus 𝑥 ≤ 𝑧.
Case 2: 𝑥, 𝑦 ∈ 𝑃1 and 𝑥 ≤1 𝑦 and 𝑦, 𝑧 ∈ 𝑃2 and 𝑦 ≤2 𝑧. But this implies𝑦 ∈ 𝑃1 ∩ 𝑃2 , which is
not possible as 𝑃1 ∩ 𝑃2 = ∅. Thus this case cannot arise.
Case 3: 𝑥, 𝑦 ∈ 𝑃2 and 𝑥 ≤2 𝑦 and 𝑦, 𝑧 ∈ 𝑃1 and 𝑦 ≤1 𝑧, but again this forces that 𝑦 ∈ 𝑃1 ∩ 𝑃2
which is again not possible as 𝑃1 ∩ 𝑃2 = ∅. So, we reject this case.
Case 4: 𝑥, 𝑦 ∈ 𝑃2 and 𝑥 ≤2 𝑦 and 𝑦, 𝑧 ∈ 𝑃2 and 𝑦 ≤2 𝑧, by transitivity of (𝑃2 ; ≤2 ) we have
𝑥, 𝑧 ∈ 𝑃2 and 𝑥 ≤2 𝑧 and so 𝑥 ≤ 𝑧.
Thus we conclude from above cases that ≤ is transitive. Thus 𝑃1 ∪ 𝑃2 is an ordered set under
the above defined order.

Example 1.1.20: Let 𝑃1 and 𝑃2 be disjoint sets, let ≤1 be an order on 𝑃1 and let ≤2 be an
order on 𝑃2. We show that the following defines an order on 𝑃1 ∪ 𝑃2:

𝑥 ≤ 𝑦 if and only if either 𝑥, 𝑦 ∈ 𝑃1 and 𝑥 ≤1 𝑦, or 𝑥, 𝑦 ∈ 𝑃2 and 𝑥 ≤2 𝑦, or 𝑥 ∈ 𝑃1 and 𝑦 ∈ 𝑃2.
The resulting ordered set is called the vertical sum or the linear sum of 𝑃1 and 𝑃2 and is
denoted by 𝑃1 ⨁ 𝑃2.

Solution: Reflexivity: Take any 𝑥 ∈ 𝑃1 ∪ 𝑃2; then 𝑥 ∈ 𝑃1 or 𝑥 ∈ 𝑃2. Since (𝑃1; ≤1) and
(𝑃2; ≤2) are ordered sets we have 𝑥 ≤1 𝑥 or 𝑥 ≤2 𝑥; in either case we have 𝑥 ≤ 𝑥. So ≤
defined above on 𝑃1 ∪ 𝑃2 is reflexive.
Antisymmetry: Suppose 𝑥, 𝑦 ∈ 𝑃1 ∪ 𝑃2 with 𝑥 ≤ 𝑦 and 𝑦 ≤ 𝑥; we have to show that 𝑥 = 𝑦.
Now 𝑥 ≤ 𝑦 means that either 𝑥, 𝑦 ∈ 𝑃1 and 𝑥 ≤1 𝑦, or 𝑥, 𝑦 ∈ 𝑃2 and 𝑥 ≤2 𝑦, or 𝑥 ∈ 𝑃1
and 𝑦 ∈ 𝑃2;
and 𝑦 ≤ 𝑥 means that either 𝑥, 𝑦 ∈ 𝑃1 and 𝑦 ≤1 𝑥, or 𝑥, 𝑦 ∈ 𝑃2 and 𝑦 ≤2 𝑥, or 𝑦 ∈ 𝑃1
and 𝑥 ∈ 𝑃2.
The following cases can arise:
Case 1: 𝑥, 𝑦 ∈ 𝑃1 and𝑥 ≤1 𝑦 and 𝑥, 𝑦 ∈ 𝑃1 and𝑦 ≤1 𝑥, thus antisymmetry of ≤1 on 𝑃1 forces
𝑥 = 𝑦.
Case 2: 𝑥, 𝑦 ∈ 𝑃1 and 𝑥 ≤1 𝑦 and 𝑥, 𝑦 ∈ 𝑃2 and 𝑦 ≤2 𝑥. But then 𝑥, 𝑦 ∈ 𝑃1 ∩ 𝑃2 ; which is not
possible as 𝑃1 and 𝑃2 are disjoint, so we omit this case.
Case 3: 𝑥, 𝑦 ∈ 𝑃1 and 𝑥 ≤1 𝑦, and 𝑦 ∈ 𝑃1 and 𝑥 ∈ 𝑃2. But then 𝑥 ∈ 𝑃1 ∩ 𝑃2, which is again
not possible as 𝑃1 and 𝑃2 are disjoint, so we omit this case also.
Case 4: 𝑥, 𝑦 ∈ 𝑃2 and 𝑥 ≤2 𝑦 and 𝑥, 𝑦 ∈ 𝑃1 and 𝑦 ≤1 𝑥. But then again 𝑥, 𝑦 ∈ 𝑃1 ∩ 𝑃2 ;
which is also not possible as 𝑃1 and 𝑃2 are disjoint, so we again reject this case.
Case 5: 𝑥, 𝑦 ∈ 𝑃2 and 𝑥 ≤2 𝑦 and 𝑥, 𝑦 ∈ 𝑃2 and 𝑦 ≤2 𝑥, thus antisymmetry of ≤2 on 𝑃2 forces
𝑥 = 𝑦.
Case 6: 𝑥, 𝑦 ∈ 𝑃2 and 𝑥 ≤2 𝑦, and 𝑦 ∈ 𝑃1 and 𝑥 ∈ 𝑃2. But then 𝑦 ∈ 𝑃1 ∩ 𝑃2, which is again
not possible as 𝑃1 and 𝑃2 are disjoint.
Case 7: 𝑥 ∈ 𝑃1 and 𝑦 ∈ 𝑃2 and 𝑦 ∈ 𝑃1 and 𝑥 ∈ 𝑃2 . But then 𝑥, 𝑦 ∈ 𝑃1 ∩ 𝑃2 which is not
possible as 𝑃1 and 𝑃2 are disjoint.
Case 8: 𝑥 ∈ 𝑃1 and 𝑦 ∈ 𝑃2 and 𝑥, 𝑦 ∈ 𝑃2 and 𝑦 ≤2 𝑥. But then 𝑥 ∈ 𝑃1 ∩ 𝑃2 which is not
possible.
Case 9: 𝑥 ∈ 𝑃1 and 𝑦 ∈ 𝑃2 and 𝑥, 𝑦 ∈ 𝑃1 and 𝑦 ≤1 𝑥. But then again 𝑥, 𝑦 ∈ 𝑃1 ∩ 𝑃2 which is
also not possible, so we reject this case.
So, we conclude from above cases that antisymmetry holds.
Transitivity: Let 𝑥, 𝑦, 𝑧 ∈ 𝑃1 ∪ 𝑃2 and suppose that 𝑥 ≤ 𝑦 and 𝑦 ≤ 𝑧 we have to show that
𝑥 ≤ 𝑧.
Now 𝑥 ≤ 𝑦 means that either 𝑥, 𝑦 ∈ 𝑃1 and 𝑥 ≤1 𝑦, or 𝑥, 𝑦 ∈ 𝑃2 and 𝑥 ≤2 𝑦, or 𝑥 ∈ 𝑃1
and 𝑦 ∈ 𝑃2;
and 𝑦 ≤ 𝑧 means that either 𝑦, 𝑧 ∈ 𝑃1 and 𝑦 ≤1 𝑧, or 𝑦, 𝑧 ∈ 𝑃2 and 𝑦 ≤2 𝑧, or 𝑦 ∈ 𝑃1
and 𝑧 ∈ 𝑃2.

Again following cases can arise:


Case 1: 𝑥, 𝑦 ∈ 𝑃1 and 𝑥 ≤1 𝑦 and 𝑦, 𝑧 ∈ 𝑃1 and 𝑦 ≤1 𝑧, thus transitivity of ≤1 on 𝑃1 forces
𝑥 ≤1 𝑧 and therefore 𝑥, 𝑧 ∈ 𝑃1 and 𝑥 ≤ 𝑧.
Case 2: 𝑥, 𝑦 ∈ 𝑃1 and 𝑥 ≤1 𝑦 and 𝑦, 𝑧 ∈ 𝑃2 and 𝑦 ≤2 𝑧. But then 𝑦 ∈ 𝑃1 ∩ 𝑃2 which is not
possible as 𝑃1 and 𝑃2 are disjoint, so we reject this case.
Case 3: 𝑥, 𝑦 ∈ 𝑃1 and 𝑥 ≤1 𝑦 and 𝑦 ∈ 𝑃1 and 𝑧 ∈ 𝑃2 . But then again 𝑦 ∈ 𝑃1 ∩ 𝑃2 which is also
not possible, so we reject this case also.
Case 4: 𝑥, 𝑦 ∈ 𝑃2 and 𝑥 ≤2 𝑦 and 𝑦, 𝑧 ∈ 𝑃1 and 𝑦 ≤1 𝑧. But then again 𝑦 ∈ 𝑃1 ∩ 𝑃2 which is
also not possible.
Case 5: 𝑥, 𝑦 ∈ 𝑃2 and 𝑥 ≤2 𝑦, and 𝑦, 𝑧 ∈ 𝑃2 and 𝑦 ≤2 𝑧; thus transitivity of ≤2 on 𝑃2 forces
𝑥 ≤2 𝑧, and therefore 𝑥, 𝑧 ∈ 𝑃2 and 𝑥 ≤ 𝑧.
Case 6: 𝑥, 𝑦 ∈ 𝑃2 and 𝑥 ≤2 𝑦 and 𝑦 ∈ 𝑃1 and 𝑧 ∈ 𝑃2 . This implies that 𝑦 ∈ 𝑃1 ∩ 𝑃2 which is
not possible as 𝑃1 and 𝑃2 are disjoint, so we reject this case.
Case 7: 𝑥 ∈ 𝑃1 and 𝑦 ∈ 𝑃2 and 𝑦, 𝑧 ∈ 𝑃1 and 𝑦 ≤1 𝑧. But then 𝑦 ∈ 𝑃1 ∩ 𝑃2 which is again not
possible, so we reject this case also.
Case 8: 𝑥 ∈ 𝑃1 and 𝑦 ∈ 𝑃2 and 𝑦, 𝑧 ∈ 𝑃2 and 𝑦 ≤2 𝑧. But then 𝑥 ∈ 𝑃1 and 𝑧 ∈ 𝑃2 , so by
definition 𝑥 ≤ 𝑧.
Case 9:𝑥 ∈ 𝑃1 and 𝑦 ∈ 𝑃2 and 𝑦 ∈ 𝑃1 and 𝑧 ∈ 𝑃2 . But this implies that 𝑦 ∈ 𝑃1 ∩ 𝑃2 which is
also not possible as 𝑃1 and 𝑃2 are disjoint, so we reject this case also.
So, from above all cases we conclude that transitivity holds.
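The linear sum of Example 1.1.20 is easy to sketch in code; the construction below is illustrative (the function names are ours), and it assumes, as the example does, that the two carrier sets are disjoint:

```python
def linear_sum_leq(P1, leq1, leq2):
    """Order on the linear sum P1 (+) P2: every element of P1 lies below every element of P2.
    Assumes P1 and P2 are disjoint; elements not in P1 are treated as elements of P2."""
    def leq(x, y):
        in1, in2 = x in P1, y in P1
        if in1 and in2:
            return leq1(x, y)
        if not in1 and not in2:
            return leq2(x, y)
        return in1  # x in P1, y in P2 gives x <= y; the other mixed case fails
    return leq

# divisors of 6 under divisibility, placed below the two-element antichain {'a', 'b'}
leq = linear_sum_leq({1, 2, 3, 6}, lambda m, n: n % m == 0, lambda x, y: x == y)
print(leq(6, 'a'), leq('a', 6), leq('a', 'b'))  # True False False
```

The construction of Example 1.1.19 (the disjoint or "parallel" sum) is the same sketch with the final mixed case returning False in both directions.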

The next result shows that the notion of order carries over from any relation to its dual.
Theorem 1.1.21: If 𝑅 is an order on 𝐸 then so is its dual.

Proof: Suppose that 𝑅 is an order on 𝐸. We have to show that 𝑅 𝑑 is also an order on 𝐸.


Reflexivity: Let 𝑥 ∈ 𝐸; then (𝑥, 𝑥) ∈ 𝑅, and hence (𝑥, 𝑥) ∈ 𝑅ᵈ.
Antisymmetry: Let 𝑥, 𝑦 ∈ 𝐸 with (𝑥, 𝑦) ∈ 𝑅ᵈ and (𝑦, 𝑥) ∈ 𝑅ᵈ. Now (𝑥, 𝑦) ∈ 𝑅ᵈ means
(𝑦, 𝑥) ∈ 𝑅, and (𝑦, 𝑥) ∈ 𝑅ᵈ means (𝑥, 𝑦) ∈ 𝑅; since 𝑅 is antisymmetric, we have 𝑥 = 𝑦.
Transitivity: Suppose 𝑥, 𝑦, 𝑧 ∈ 𝐸 are such that (𝑥, 𝑦) ∈ 𝑅ᵈ and (𝑦, 𝑧) ∈ 𝑅ᵈ; we have to show
(𝑥, 𝑧) ∈ 𝑅ᵈ. Now (𝑥, 𝑦) ∈ 𝑅ᵈ means (𝑦, 𝑥) ∈ 𝑅, and (𝑦, 𝑧) ∈ 𝑅ᵈ means (𝑧, 𝑦) ∈ 𝑅. By
transitivity of 𝑅, from (𝑧, 𝑦) ∈ 𝑅 and (𝑦, 𝑥) ∈ 𝑅 we get (𝑧, 𝑥) ∈ 𝑅, that is, (𝑥, 𝑧) ∈ 𝑅ᵈ. Thus
transitivity holds, which proves the theorem.
Notation: We shall denote the dual of an order ≤ on 𝐸 by the symbol ≥, which we read as
"greater than or equal to". The ordered set (𝐸; ≥) is called the dual of (𝐸; ≤) and is often
written 𝐸ᵈ.
As a consequence of (Theorem 1.1.21) we can assert that to every statement that concerns an
order on a set 𝐸 there is a dual statement that concerns the corresponding dual order on 𝐸.
Principle of Duality: To every theorem that concerns an ordered set 𝐸 there corresponds a
theorem that concerns the dual ordered set 𝐸ᵈ, obtained by replacing each statement
involving ≤ by its dual.

Definition 1.1.22: If (𝐸; ≤) is an ordered set, then by the top element (or maximum element,
or greatest element) of 𝐸 we mean an element 𝑥 ∈ 𝐸 such that 𝑦 ≤ 𝑥 for every 𝑦 ∈ 𝐸.

Note: A top element, when it exists, is unique: if 𝑥 and 𝑦 are both top elements of 𝐸 then
on the one hand 𝑦 ≤ 𝑥 and on the other hand 𝑥 ≤ 𝑦, whence by the antisymmetry of ≤ we
have 𝑥 = 𝑦.

Definition 1.1.23: By a bottom element or minimum element we mean an element 𝑧 ∈ 𝐸 such


that 𝑧 ≤ 𝑦 for every 𝑦 ∈ 𝐸.

Note: A bottom element, when it exists, is unique: if 𝑥 and 𝑧 are both bottom elements
of 𝐸, then on the one hand we have 𝑧 ≤ 𝑥 and on the other hand we have 𝑥 ≤ 𝑧, whence by
the antisymmetry of ≤ we have 𝑥 = 𝑧.

Definition 1.1.24: An ordered set that has both a top element and bottom element is said to be
bounded.

Note: We shall use the notation 𝑥 < 𝑦 to mean 𝑥 ≤ 𝑦 and 𝑥 ≠ 𝑦. The relation < thus
defined is transitive but is not an order, since it fails to be reflexive. In other words, a strict
order is characterized by the irreflexive and transitive laws.

Lemma 1.1.25: Let (𝐸, ≤) be an ordered set and 𝑥1, 𝑥2, … , 𝑥𝑛 ∈ 𝐸. If 𝑥1 ≤ 𝑥2 ≤ ⋯ ≤ 𝑥𝑛 ≤ 𝑥1
then 𝑥1 = 𝑥2 = ⋯ = 𝑥𝑛.

Proof: Suppose 𝑥1 ≤ 𝑥2 ≤ ⋯ ≤ 𝑥𝑛 ≤ 𝑥1. By transitivity, for each 𝑖 we have 𝑥1 ≤ 𝑥𝑖 (from
the first part of the chain) and 𝑥𝑖 ≤ 𝑥𝑛 ≤ 𝑥1 (from the rest of the chain), so 𝑥𝑖 ≤ 𝑥1.
Antisymmetry now gives 𝑥𝑖 = 𝑥1 for each 𝑖, and hence 𝑥1 = 𝑥2 = ⋯ = 𝑥𝑛.

Definition 1.1.26: In an ordered set (𝐸; ≤) we say that 𝑥 is covered by 𝑦 (or that 𝑦 covers 𝑥) if
𝑥 < 𝑦 and there is no 𝑎 ∈ 𝐸 such that 𝑥 < 𝑎 < 𝑦. We denote this by 𝑥 ≺ 𝑦. The set of pairs
(𝑥, 𝑦) such that 𝑦 covers 𝑥 is called the covering relation of (𝐸; ≤).

Example 1.1.27: The covering relation of the partial ordering {(𝑎, 𝑏): 𝑎 divides 𝑏} on
{1, 2, 3, 4, 6, 12} is the following set of pairs:
(1,2), (1,3), (2,4), (2,6), (3,6), (4,12), (6,12).
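For a finite ordered set the covering relation can be computed by brute force straight from the definition; a small sketch, applied to the divisibility order on the divisors of 12:

```python
def covers(E, leq):
    """The covering relation of (E, <=): pairs (x, y) with x < y and nothing strictly between."""
    lt = lambda a, b: leq(a, b) and a != b
    return sorted((x, y) for x in E for y in E
                  if lt(x, y) and not any(lt(x, a) and lt(a, y) for a in E))

divides = lambda m, n: n % m == 0
print(covers([1, 2, 3, 4, 6, 12], divides))
# [(1, 2), (1, 3), (2, 4), (2, 6), (3, 6), (4, 12), (6, 12)]
```

These pairs are exactly the edges one draws in the Hasse diagram of the next section.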

1.2 Hasse diagrams

This section presents an important tool for working with partially ordered sets: we see how
an order on a set can be represented by a diagram. Such diagrams are called Hasse diagrams
and are defined below.

Definition 1.2.1: A diagram representing an ordered set is called a Hasse diagram if:
(1) elements are represented by points, and any two elements 𝑥, 𝑦 with 𝑥 ≺ 𝑦 are joined by
an increasing line segment.

(2) The following conventions are used while drawing the Hasse diagram of an ordered set 𝑆:

(i) Since the partial ordering is reflexive, each vertex of 𝑆 is related to itself;
consequently, for convenience, all loops are deleted in a Hasse diagram.
(ii) Since the partial ordering is transitive, if 𝑎 ≤ 𝑏 and 𝑏 ≤ 𝑐 it follows that 𝑎 ≤ 𝑐;
consequently, edges implied by transitivity are also deleted.
(iii) If a vertex 𝑎 is connected to a vertex 𝑏, that is 𝑎 ≤ 𝑏, then 𝑏 is drawn above 𝑎 as an
immediate successor of 𝑎. Thus the arrows may be omitted.

Example 1.2.2: Let 𝐸 = {1,2,3,4,6,12} be the set of positive integral divisors of 12. Then the
Hasse diagram of (𝐸; ≤) where ≤ is divisibility order on 𝐸 is as follows.

Example 1.2.3: Let 𝑋 = {𝑎, 𝑏, 𝑐} and 𝐸 = ℙ(𝑋). Then the Hasse diagram of (𝐸; ≤), where ≤
is the inclusion order on ℙ(𝑋), can be drawn as below:

The Hasse diagram of the dual 𝐸ᵈ = (𝐸; ⊇) is obtained by turning the above diagram
upside down and is as follows:
Example 1.2.4: We draw the Hasse diagrams of sets of 3, 4 and 5 elements, taking different
orders on them.

Solution: Sets of 3 elements:

(i) Let 𝐴 = {𝑎, 𝑏, 𝑐}. If we order 𝐴 as a chain, we get the Hasse diagram below:

(ii) If we take the set of positive integral divisors of 4 and order it by divisibility, we obtain
the Hasse diagram:

Sets of 4 elements:
(i) {𝑎, 𝑏, 𝑐, 𝑑} under the usual order:

(ii) The set of positive integral divisors of 6, ordered by divisibility:

Sets of 5 elements:
(i) {𝑎, 𝑏, 𝑐, 𝑑, 𝑒} with the usual order:

(ii) The set of positive integral divisors of 16, ordered by divisibility:

Example 1.2.5: We draw the Hasse diagram for the set of positive integral divisors of 210,
ordered by divisibility.
Solution: The set of positive divisors of 210 is
𝑆 = {1, 2, 3, 5, 6, 7, 10, 14, 15, 21, 30, 35, 42, 70, 105, 210}.

The Hasse diagram of this set, ordered by divisibility, is given below:


Example 1.2.6: Let 𝑃1 and 𝑃2 be the ordered sets with Hasse diagrams;

We draw the Hasse diagrams of 𝑃1 × 𝑃2 and 𝑃2 × 𝑃1 under the Cartesian order.

Solution: Since 𝑃1 = {𝑎, 𝑏, 𝑐} and 𝑃2 = {𝑥, 𝑦}, therefore;


𝑃1 × 𝑃2 = {(𝑎, 𝑥), (𝑎, 𝑦), (𝑏, 𝑥), (𝑏, 𝑦), (𝑐, 𝑥), (𝑐, 𝑦)}, and its Hasse diagram under the
Cartesian order is given by:

Now, 𝑃2 × 𝑃1 = {(𝑥, 𝑎), (𝑥, 𝑏), (𝑥, 𝑐), (𝑦, 𝑎), (𝑦, 𝑏), (𝑦, 𝑐)} and its Hasse diagram under
Cartesian order is given by;

From the above we conclude that 𝑃1 × 𝑃2 and 𝑃2 × 𝑃1 have the same Hasse diagram, except
for the order of the components of the vertices.

1.3 Order Preserving and Order Reversing Mappings


Ordered sets can be related to each other in different ways. In this section we look at how
ordered sets can be related by defining maps between them in such a way that the order is
either preserved or reversed; such maps are given special names accordingly. The section
ends with some important results on order-preserving mappings.

Definition 1.3.1: If (𝐴, ≤1) and (𝐵, ≤2) are ordered sets, then we say that a mapping 𝑓: 𝐴 → 𝐵
is isotone (or monotone, or order preserving) if
for all 𝑥, 𝑦 ∈ 𝐴, 𝑥 ≤1 𝑦 implies 𝑓(𝑥) ≤2 𝑓(𝑦),
and is antitone (or order reversing) if
for all 𝑥, 𝑦 ∈ 𝐴, 𝑥 ≤1 𝑦 implies 𝑓(𝑥) ≥2 𝑓(𝑦).

Example 1.3.2: If 𝐸 is a non-empty set and 𝐴 ⊆ 𝐸, then 𝑓𝐴: 𝑃(𝐸) → 𝑃(𝐸) given by 𝑓𝐴(𝑋) =
𝐴 ∩ 𝑋 is isotone; and if 𝑋´ is the complement of 𝑋 in 𝐸, then the assignment 𝑋 ↦ 𝑋´ defines
an antitone mapping on 𝑃(𝐸).

Solution: Let 𝑋, 𝑌 ∈ 𝑃(𝐸) be such that 𝑋 ⊆ 𝑌; we show that 𝑓𝐴(𝑋) ⊆ 𝑓𝐴(𝑌), i.e., 𝐴 ∩ 𝑋 ⊆ 𝐴 ∩ 𝑌.
Let 𝑥 ∈ 𝐴 ∩ 𝑋; then 𝑥 ∈ 𝐴 and 𝑥 ∈ 𝑋, and therefore 𝑥 ∈ 𝐴 and 𝑥 ∈ 𝑌 (since 𝑋 ⊆ 𝑌), which
implies 𝑥 ∈ 𝐴 ∩ 𝑌. Thus 𝑓𝐴(𝑋) ⊆ 𝑓𝐴(𝑌), showing that 𝑓𝐴 is isotone.
Now we show that 𝑓(𝑋) = 𝑋´ is antitone.
Let 𝑋, 𝑌 ∈ 𝑃(𝐸) be such that 𝑋 ⊆ 𝑌; we have to show that 𝑌´ ⊆ 𝑋´.
Let 𝑥 ∈ 𝑌´; then 𝑥 ∉ 𝑌, and therefore 𝑥 ∉ 𝑋 (since 𝑋 ⊆ 𝑌), which implies 𝑥 ∈ 𝑋´. Thus
𝑌´ ⊆ 𝑋´, showing that 𝑓 is antitone.
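Both properties of Example 1.3.2 can be verified exhaustively on a small power set; a minimal Python sketch (the helper names are ours):

```python
from itertools import combinations

def subsets(E):
    """All subsets of a finite set E, as frozensets."""
    E = list(E)
    return [frozenset(c) for r in range(len(E) + 1) for c in combinations(E, r)]

E = frozenset({1, 2, 3})
A = frozenset({1, 2})
P = subsets(E)
f_A = lambda X: A & X     # X |-> A ∩ X
comp = lambda X: E - X    # X |-> X' (complement in E)

# X ⊆ Y should give f_A(X) ⊆ f_A(Y) (isotone) and comp(Y) ⊆ comp(X) (antitone)
assert all(f_A(X) <= f_A(Y) and comp(Y) <= comp(X)
           for X in P for Y in P if X <= Y)
print("f_A is isotone and complementation is antitone on P({1,2,3})")
```

Here `<=` on frozensets is Python's subset test, so the assertion is a direct transcription of the two displayed implications.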

Example 1.3.3: Given 𝑓: 𝐸 → 𝐹, consider the induced direct image map 𝑓→: 𝑃(𝐸) → 𝑃(𝐹)
defined for every 𝑋 ⊆ 𝐸 by 𝑓→(𝑋) = {𝑓(𝑥) | 𝑥 ∈ 𝑋}, and the induced inverse image map
𝑓←: 𝑃(𝐹) → 𝑃(𝐸) defined for every 𝑌 ⊆ 𝐹 by 𝑓←(𝑌) = {𝑥 ∈ 𝐸 | 𝑓(𝑥) ∈ 𝑌}. Each of these
mappings is isotone.

Solution: Let 𝐶, 𝐷 ∈ 𝑃(𝐸) be such that 𝐶 ⊆ 𝐷. We claim that 𝑓→(𝐶) ⊆ 𝑓→(𝐷).

We have by definition
𝑓→(𝐶) = {𝑓(𝑥) | 𝑥 ∈ 𝐶} and 𝑓→(𝐷) = {𝑓(𝑦) | 𝑦 ∈ 𝐷}.
Let 𝑧 ∈ 𝑓→(𝐶); then 𝑧 = 𝑓(𝑥) for some 𝑥 ∈ 𝐶. Since 𝐶 ⊆ 𝐷 we have 𝑥 ∈ 𝐷, and therefore
by definition 𝑧 = 𝑓(𝑥) ∈ 𝑓→(𝐷). Thus 𝐶 ⊆ 𝐷 gives 𝑓→(𝐶) ⊆ 𝑓→(𝐷), proving the claim.
Now we show that 𝑓←: 𝑃(𝐹) → 𝑃(𝐸), defined for every 𝑌 ⊆ 𝐹 by
𝑓←(𝑌) = {𝑥 ∈ 𝐸 | 𝑓(𝑥) ∈ 𝑌},
is isotone.
Let 𝑋, 𝑌 ∈ 𝑃(𝐹) be such that 𝑋 ⊆ 𝑌; we have to show that 𝑓←(𝑋) ⊆ 𝑓←(𝑌).
By definition 𝑓←(𝑋) = {𝑥 ∈ 𝐸 | 𝑓(𝑥) ∈ 𝑋} and 𝑓←(𝑌) = {𝑦 ∈ 𝐸 | 𝑓(𝑦) ∈ 𝑌}.
Let 𝑧 ∈ 𝑓←(𝑋); then 𝑧 ∈ 𝐸 with 𝑓(𝑧) ∈ 𝑋, so 𝑓(𝑧) ∈ 𝑌 (as 𝑋 ⊆ 𝑌), and hence 𝑧 ∈ 𝑓←(𝑌).
Thus 𝑓←(𝑋) ⊆ 𝑓←(𝑌), showing that 𝑓← is isotone.

We shall now give a natural interpretation of isotone mappings. For this purpose we require
the following notions.

Definition 1.3.4: (i) By a down-set of an ordered set 𝐸 we shall mean a subset 𝐷 of 𝐸 with the
property that if 𝑥 ∈ 𝐷 and 𝑦 ∈ 𝐸 is such that 𝑦 ≤ 𝑥, then 𝑦 ∈ 𝐷. We include the empty subset
of 𝐸 as a down-set. By a principal down-set we shall mean a down-set of the form
𝑥↓ = {𝑦 ∈ 𝐸 | 𝑦 ≤ 𝑥},
i.e., the down-set of 𝐸 generated by 𝑥.
(ii) By an up-set of an ordered set 𝐸 we shall mean a subset 𝑈 of 𝐸 with the property that if
𝑥 ∈ 𝑈 and 𝑦 ∈ 𝐸 is such that 𝑦 ≥ 𝑥, then 𝑦 ∈ 𝑈; a principal up-set is an up-set of the form
𝑥↑ = {𝑦 ∈ 𝐸 | 𝑦 ≥ 𝑥},
i.e., the up-set of 𝐸 generated by 𝑥.
Example 1.3.5: In the chain ℚ⁺ of positive rational numbers the set 𝐷 = {𝑞 ∈ ℚ⁺ | 𝑞² ≤ 2} is
a down-set that is not principal.

Solution: Clearly 𝐷 ⊆ ℚ⁺. Let 𝑥 ∈ 𝐷, so that 𝑥² ≤ 2, and let 𝑦 ∈ ℚ⁺ be such that 𝑦 ≤ 𝑥.
Then 𝑦² ≤ 𝑥² ≤ 2, so 𝑦² ≤ 2, which gives 𝑦 ∈ 𝐷. Thus 𝐷 is a down-set. Now suppose, to
the contrary, that 𝐷 is a principal down-set, say 𝐷 = 𝑥0↓ for some 𝑥0 ∈ ℚ⁺. Then 𝑥0 ∈ 𝐷,
so 𝑥0² ≤ 2; and since no rational number has square equal to 2, in fact 𝑥0² < 2. But then
there is a rational 𝑞 with 𝑥0 < 𝑞 and 𝑞² < 2 (take any rational strictly between 𝑥0 and √2),
so 𝑞 ∈ 𝐷 while 𝑞 ∉ 𝑥0↓, a contradiction. Thus 𝐷 is not a principal down-set.
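The contradiction can be made concrete: given any candidate generator 𝑥0 ∈ 𝐷 one can exhibit an explicitly larger element of 𝐷. A sketch using exact rational arithmetic; the particular increment (2 − 𝑥0²)/4 is just one convenient choice that stays below √2:

```python
from fractions import Fraction

# D = {q in Q+ : q^2 <= 2}. Given a candidate generator x0 in D (so x0^2 < 2,
# because no rational squares to 2), the rational q = x0 + (2 - x0^2)/4
# satisfies x0 < q and q^2 <= 2, so q lies in D but not in x0↓.
def bigger_element_of_D(x0):
    q = x0 + (2 - x0 * x0) / 4
    assert x0 < q and q * q <= 2
    return q

x0 = Fraction(1)
for _ in range(5):
    x0 = bigger_element_of_D(x0)   # climbs through D, approaching sqrt(2) from below
print(float(x0))
```

No matter how many times the step is repeated, the assertion never fails, reflecting the fact that 𝐷 has no greatest element.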

The next result shows that the intersection and union of two down-sets are again down-sets.

Proposition 1.3.6: If 𝐴 and 𝐵 are down-sets of an ordered set 𝐸 then so are 𝐴 ∩ 𝐵 and 𝐴 ∪ 𝐵.
Proof: Let 𝐴 and 𝐵 be down-sets of an ordered set 𝐸, and let 𝑥 ∈ 𝐴 ∩ 𝐵 and 𝑦 ∈ 𝐸 with
𝑦 ≤ 𝑥. Then 𝑥 ∈ 𝐴 and 𝑥 ∈ 𝐵. Since 𝐴 is a down-set, 𝑦 ∈ 𝐴; also, since 𝐵 is a down-set,
𝑦 ∈ 𝐵. Thus 𝑦 ∈ 𝐴 ∩ 𝐵, which shows that 𝐴 ∩ 𝐵 is a down-set.
Now we show 𝐴 ∪ 𝐵 is a down-set. Let 𝑥 ∈ 𝐴 ∪ 𝐵 and 𝑦 ∈ 𝐸 with 𝑦 ≤ 𝑥. Then 𝑥 ∈ 𝐴 or
𝑥 ∈ 𝐵. If 𝑥 ∈ 𝐴 then, since 𝐴 is a down-set, 𝑦 ∈ 𝐴; if 𝑥 ∈ 𝐵 then, since 𝐵 is a down-set,
𝑦 ∈ 𝐵. In either case 𝑦 ∈ 𝐴 ∪ 𝐵, showing that 𝐴 ∪ 𝐵 is a down-set.
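On a finite ordered set the down-set property, and the closure under ∩ and ∪ just proved, can be checked directly; a small sketch on the divisors of 12:

```python
def is_down_set(D, E, leq):
    """D is a down-set of (E, <=) when x in D and y <= x imply y in D."""
    return all(y in D for x in D for y in E if leq(y, x))

E = [1, 2, 3, 4, 6, 12]
divides = lambda m, n: n % m == 0
A, B = {1, 2, 4}, {1, 3}   # two down-sets of (E, |)
assert is_down_set(A, E, divides) and is_down_set(B, E, divides)
assert is_down_set(A & B, E, divides)
assert is_down_set(A | B, E, divides)
print("A ∩ B and A ∪ B are again down-sets")
```

By contrast, {2, 4} is not a down-set here, since 1 divides 2 but 1 is missing.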

Note: The above result does not hold in general for principal down-sets. For example, in the
four-element ordered set {𝑎, 𝑏, 𝑐, 𝑑} in which 𝑎 and 𝑏 are incomparable and each lies below
both 𝑐 and 𝑑, we have 𝑐↓ ∩ 𝑑↓ = {𝑎, 𝑏, 𝑐} ∩ {𝑎, 𝑏, 𝑑} = {𝑎, 𝑏} = 𝑎↓ ∪ 𝑏↓, which is not a
principal down-set.

Isotone mappings are characterized by the following properties:

Theorem 1.3.7: If 𝐸 and 𝐹 are ordered sets and if 𝑓: 𝐸 → 𝐹 is any mapping, then the following
statements are equivalent:
(1) 𝑓 is isotone;
(2) the inverse image of every principal down-set of 𝐹 is a down-set of 𝐸;
(3) the inverse image of every principal up-set of 𝐹 is an up-set of 𝐸.

Proof: (1) ⇒ (2): Suppose that 𝑓: 𝐸 → 𝐹 is isotone, let 𝑦 ∈ 𝐹 and let 𝐴 = 𝑓←(𝑦↓), that is,
𝐴 = {𝑥 ∈ 𝐸 | 𝑓(𝑥) ∈ 𝑦↓}. (i)
If 𝐴 is empty, then 𝐴 is clearly a down-set. Suppose that 𝐴 is non-empty and let 𝑥 ∈ 𝐴, so
that 𝑓(𝑥) ≤ 𝑦. Then for every 𝑧 ∈ 𝐸 with 𝑧 ≤ 𝑥 we have
𝑓(𝑧) ≤ 𝑓(𝑥) ≤ 𝑦 (since 𝑓 is isotone),
so 𝑓(𝑧) ≤ 𝑦 (by transitivity). This implies 𝑓(𝑧) ∈ 𝑦↓, and thus 𝑧 ∈ 𝐴 (by (i)). So, by
definition, 𝐴 is a down-set of 𝐸.
(2) ⇒ (1): For any 𝑥 ∈ 𝐸 we have 𝑓(𝑥) ∈ (𝑓(𝑥))↓, therefore 𝑥 ∈ 𝑓←((𝑓(𝑥))↓). By (2) this
is a down-set of 𝐸, so if 𝑦 ∈ 𝐸 is such that 𝑦 ≤ 𝑥 then 𝑦 ∈ 𝑓←((𝑓(𝑥))↓); this implies
𝑓(𝑦) ∈ (𝑓(𝑥))↓, so by definition 𝑓(𝑦) ≤ 𝑓(𝑥). Thus 𝑓 is isotone.
(1) ⇔ (3): This follows from the above by the principle of duality.
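The implication (1) ⇒ (2) can be observed concretely on a finite example; below 𝑓 counts, for each divisor of 12, its divisors within the set, which is an isotone map from the divisibility order into the usual order on integers (the setup is our own illustration):

```python
def is_down_set(D, E, leq):
    return all(y in D for x in D for y in E if leq(y, x))

E = [1, 2, 3, 4, 6, 12]
divides = lambda m, n: n % m == 0
f = lambda n: sum(1 for d in E if n % d == 0)   # divisor count within E; isotone for |

for y in sorted({f(x) for x in E}):
    preimage = {x for x in E if f(x) <= y}      # f←(y↓), with the usual order on integers
    assert is_down_set(preimage, E, divides)
print("every f←(y↓) is a down-set of (E, |), as Theorem 1.3.7 predicts")
```

For instance, f(12) = 6 and f(1) = 1, and each set {x : f(x) ≤ y} is closed downwards under divisibility.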

1.4 Residuated Mappings

In view of the above natural result, we now investigate under what conditions the inverse
image of a principal down-set is also a principal down-set. The outcome will be a type of
mapping that plays an important role in the sequel.

Theorem 1.4.1: If 𝐸 and 𝐹 are ordered sets, then the following conditions concerning
𝑓: 𝐸 → 𝐹 are equivalent:
(1) the inverse image under 𝑓 of every principal down-set of 𝐹 is a principal down-set of
𝐸;
(2) 𝑓 is isotone and there is an isotone mapping 𝑔: 𝐹 → 𝐸 such that 𝑔 ∘ 𝑓 ≥ 𝑖𝑑𝐸 and
𝑓 ∘ 𝑔 ≤ 𝑖𝑑𝐹.

Proof: (1) ⇒ (2): If (1) holds, then by Theorem 1.3.7 𝑓 is isotone. Moreover, for every 𝑦 ∈ 𝐹
there exists 𝑥 ∈ 𝐸 such that 𝑓 ← (𝑦 ↓ ) = 𝑥 ↓ , that is {𝑧 ∈ 𝐸 | 𝑓(𝑧) ≤ 𝑦} = 𝑥 ↓ .
Claim: For every 𝑦 ∈ 𝐹, this element 𝑥 is unique. Indeed, if 𝑥0 ∈ 𝐸 also satisfies 𝑓 ← (𝑦 ↓ ) = 𝑥0↓
then 𝑥 ↓ = 𝑥0↓ ; since 𝑥 ∈ 𝑥 ↓ = 𝑥0↓ we have 𝑥 ≤ 𝑥0 , and similarly 𝑥0 ≤ 𝑥, so 𝑥 = 𝑥0 . So
we can define a mapping 𝑔: 𝐹 → 𝐸 by setting 𝑔(𝑦) = 𝑥.
Claim: 𝑔: 𝐹 → 𝐸 defined as above is isotone:
For this let 𝑦1 , 𝑦2 ∈ 𝐹 with 𝑦1 ≤ 𝑦2 . We will show that 𝑔(𝑦1 ) ≤ 𝑔(𝑦2 ) that is 𝑥1 ≤ 𝑥2 .
Let 𝑥 ∈ 𝑦1↓ then by definition 𝑥 ≤ 𝑦1 ≤ 𝑦2 (as 𝑦1 ≤ 𝑦2 ) or 𝑥 ≤ 𝑦2 , which implies 𝑥 ∈ 𝑦2↓ so
𝑦1↓ ⊆ 𝑦2↓ . Since 𝑓 ← is isotone, we have 𝑓 ← (𝑦1↓ ) ⊆ 𝑓 ← (𝑦2↓ ) which implies 𝑥1↓ ⊆ 𝑥2↓ .
Now 𝑥1 ∈ 𝑥1↓ ⊆ 𝑥2↓ gives 𝑥1 ≤ 𝑥2 (by definition of down-set), therefore 𝑔(𝑦1 ) ≤ 𝑔(𝑦2 ) and 𝑔 is isotone.
Next, for every 𝑦 ∈ 𝐹 we have 𝑔(𝑦) ∈ 𝑔(𝑦)↓ = 𝑓 ← (𝑦 ↓ ), which implies 𝑓(𝑔(𝑦)) ∈ 𝑦 ↓ , that is
𝑓(𝑔(𝑦)) ≤ 𝑦. Thus 𝑓 ∘ 𝑔 ≤ 𝑖𝑑𝐹 . Also, for every 𝑥 ∈ 𝐸 we have 𝑓(𝑥) ≤ 𝑓(𝑥), so
𝑥 ∈ 𝑓 ← (𝑓(𝑥)↓ ) = 𝑔(𝑓(𝑥))↓ , that is 𝑥 ≤ 𝑔(𝑓(𝑥)). Thus 𝑔 ∘ 𝑓 ≥ 𝑖𝑑𝐸 , and
(1) ⇒ (2) holds.
(2) ⇒ (1): Suppose that (2) holds, then for all 𝑓(𝑥), 𝑦 ∈ 𝐹 with 𝑓(𝑥) ≤ 𝑦 we have;
𝑥 ≤ 𝑔{𝑓(𝑥)} ≤ 𝑔(𝑦) (since 𝑔 is isotone).
Also, whenever 𝑥 ≤ 𝑔(𝑦) we have (since 𝑓 is isotone) 𝑓(𝑥) ≤ 𝑓(𝑔(𝑦)) ≤ 𝑦.
It follows from the above observations that 𝑓(𝑥) ≤ 𝑦 if and only if 𝑥 ≤ 𝑔(𝑦); which implies
𝑓(𝑥) ∈ 𝑦 ↓ if and only if 𝑥 ∈ 𝑔(y) ↓ or 𝑥 ∈ 𝑓 ← (𝑦 ↓ ) if and only if 𝑥 ∈ 𝑔(𝑦)↓ ; which gives 𝑓 ←
(𝑦 ↓ ) = (𝑔(y)) ↓. Thus (2) ⇒ (1) holds.

Definition 1.4.2: A mapping 𝑓: 𝐸 → 𝐹 that satisfies either of the equivalent conditions of
Theorem 1.4.1 is said to be residuated.

Note: If 𝑓: 𝐸 → 𝐹 is a residuated mapping, then the isotone mapping 𝑔: 𝐹 → 𝐸 satisfying
𝑔 ∘ 𝑓 ≥ 𝑖𝑑𝐸 and 𝑓 ∘ 𝑔 ≤ 𝑖𝑑𝐹 is unique. To see this, suppose 𝑔 and 𝑔∗ are two isotone
mappings satisfying the above properties; then
𝑔 = 𝑖𝑑𝐸 ∘ 𝑔 ≤ (𝑔∗ ∘ 𝑓) ∘ 𝑔 = 𝑔∗ ∘ (𝑓 ∘ 𝑔) ≤ 𝑔∗ ∘ 𝑖𝑑𝐹 = 𝑔∗ ,
which implies 𝑔 ≤ 𝑔∗ .
Similarly, 𝑔∗ = 𝑖𝑑𝐸 ∘ 𝑔∗ ≤ (𝑔 ∘ 𝑓) ∘ 𝑔∗ = 𝑔 ∘ (𝑓 ∘ 𝑔∗ ) ≤ 𝑔 ∘ 𝑖𝑑𝐹 = 𝑔. It follows that 𝑔∗ ≤ 𝑔 and so 𝑔 =
𝑔∗ .
We shall denote this unique 𝑔 by 𝑓 + and call it the residual of 𝑓. Thus 𝑓 + ∘ 𝑓 ≥ 𝑖𝑑𝐸 and 𝑓 ∘
𝑓 + ≤ 𝑖𝑑𝐹 .

Example 1.4.3: If 𝑓: 𝐸 → 𝐹 then the direct image map 𝑓 → ∶ 𝑃(𝐸) → 𝑃(𝐹) is residuated with
residual 𝑓 + = 𝑓 ← ∶ 𝑃(𝐹) → 𝑃(𝐸).

Solution: We are given 𝑓: 𝐸 → 𝐹 and we know that 𝑓 → ∶ 𝑃(𝐸) → 𝑃(𝐹) is defined for every
𝑋 ⊆ 𝐸 by
𝑓 → (𝑋) = {𝑓(𝑥)| 𝑥 ∈ 𝑋}. (i)
Also 𝑓 + = 𝑓 ← ∶ 𝑃(𝐹) → 𝑃(𝐸) is defined for every 𝑌 ⊆ 𝐹 by
𝑓 ← (𝑌) = {𝑥 ∈ 𝐸 | 𝑓(𝑥) ∈ 𝑌}. (ii)
We have to show that 𝑓 + = 𝑓 ← is the residual of 𝑓 → or in other words 𝑓 → is residuated.
Now for any 𝑋 ∈ 𝑃(𝐸) we have (𝑓 ← ∘ 𝑓 → )(𝑋) = 𝑓 ← (𝑓 → (𝑋))
= 𝑓 ← ({𝑓(𝑥)| 𝑥 ∈ 𝑋}) (by (i))
⊇ 𝑋.
So, from the above we get 𝑓 ← ∘ 𝑓 → ≥ 𝑖𝑑𝑃(𝐸) . Similarly, we can show that 𝑓 → ∘ 𝑓 ← ≤ 𝑖𝑑𝑃(𝐹) ,
therefore by definition 𝑓 → is residuated with residual 𝑓 + = 𝑓 ← . This establishes the claim and
hence the result holds.
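The adjunction above can be checked exhaustively on a small finite instance. The following Python sketch is our own illustration (the map 𝑓 and the helper names f_direct, f_inverse are chosen for this example, not taken from the text):

```python
from itertools import combinations

def powerset(s):
    """All subsets of s, as frozensets."""
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1)
            for c in combinations(s, r)]

E, F = {1, 2, 3}, {'a', 'b'}
f = {1: 'a', 2: 'a', 3: 'b'}            # an arbitrary map f : E -> F

def f_direct(X):                         # f->(X) = {f(x) | x in X}
    return frozenset(f[x] for x in X)

def f_inverse(Y):                        # f<-(Y) = {x in E | f(x) in Y}
    return frozenset(x for x in E if f[x] in Y)

# f<- o f-> >= id on P(E), and f-> o f<- <= id on P(F)
assert all(X <= f_inverse(f_direct(X)) for X in powerset(E))
assert all(f_direct(f_inverse(Y)) <= Y for Y in powerset(F))
# equivalently, the adjunction: f->(X) ⊆ Y  iff  X ⊆ f<-(Y)
assert all((f_direct(X) <= Y) == (X <= f_inverse(Y))
           for X in powerset(E) for Y in powerset(F))
```

Here `<=` on frozensets is the subset relation, so the assertions state exactly the two inequalities of Definition 1.4.2.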

Example 1.4.4: If 𝐸 is any set and 𝐴 ⊆ 𝐸 then 𝜆𝐴 : 𝑃(𝐸) → 𝑃(𝐸) defined by 𝜆𝐴 (𝑋) = 𝐴 ∩ 𝑋 is
residuated with residual 𝜆𝐴+ given by 𝜆𝐴+ (𝑌) = 𝑌 ∪ 𝐴´.

Solution: We are given 𝜆𝐴 : 𝑃(𝐸) → 𝑃(𝐸) defined by 𝜆𝐴 (𝑋) = 𝐴 ∩ 𝑋. (i)


We have to show that 𝜆𝐴 is residuated with residual 𝜆𝐴+ defined as below:
𝜆𝐴+ (𝑌) = 𝑌 ∪ 𝐴′. (ii)
Now we have for any 𝑋 ∈ 𝑃(𝐸) 𝜆𝐴 ∘ 𝜆𝐴+ (𝑋) = 𝜆𝐴 (𝜆𝐴+ (𝑋))
= 𝜆𝐴 (𝑋 ∪ 𝐴´)(by (ii))
= 𝐴 ∩ ( 𝑋 ∪ 𝐴´) (by (i))
= (𝐴 ∩ 𝑋) ∪ (𝐴 ∩ 𝐴´)
= (𝐴 ∩ 𝑋) ∪ ∅
= 𝐴∩𝑋
⊆ 𝑋.
From this we get that 𝜆𝐴 ∘ 𝜆𝐴+ ≤ 𝑖𝑑𝑃(𝐸) . Similarly, we can show that 𝜆𝐴+ ∘ 𝜆𝐴 ≥ 𝑖𝑑𝑃(𝐸) . Thus by
definition 𝜆𝐴 is residuated with residual 𝜆𝐴+ .
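Both composite inequalities can again be verified by brute force on a small set. A minimal Python sketch (our own names lam and lamp for 𝜆𝐴 and 𝜆𝐴+ ):

```python
from itertools import combinations

E = frozenset({1, 2, 3, 4})
A = frozenset({1, 2})
Ac = E - A                                  # A', the complement of A in E

lam  = lambda X: A & X                      # λ_A(X)  = A ∩ X
lamp = lambda Y: Y | Ac                     # λ_A+(Y) = Y ∪ A'

subsets = [frozenset(c) for r in range(len(E) + 1)
           for c in combinations(E, r)]

# λ_A ∘ λ_A+ ≤ id and λ_A+ ∘ λ_A ≥ id on P(E)
assert all(lam(lamp(X)) <= X for X in subsets)
assert all(X <= lamp(lam(X)) for X in subsets)
```

The first assertion mirrors the chain of equalities in the solution: 𝐴 ∩ (𝑋 ∪ 𝐴′) = 𝐴 ∩ 𝑋 ⊆ 𝑋.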

Theorem 1.4.5: If 𝑓: 𝐸 → 𝐹 is residuated then;

𝑓 ∘ 𝑓 + ∘ 𝑓 = 𝑓 and 𝑓 + ∘ 𝑓 ∘ 𝑓 + = 𝑓 + .

Proof: Since 𝑓: 𝐸 → 𝐹 is residuated, therefore 𝑓 is isotone. Hence 𝑓 ∘ 𝑓 + ∘ 𝑓 ≥ 𝑓 ∘ 𝑖𝑑𝐸 = 𝑓


and 𝑓 ∘ 𝑓 + ∘ 𝑓 ≤ 𝑖𝑑𝐹 ∘ 𝑓 = 𝑓. From which the first equality holds.
Now 𝑓 + ∘ 𝑓 ∘ 𝑓 + ≤ 𝑓 + ∘ 𝑖𝑑𝐹 = 𝑓 + and 𝑓 + ∘ 𝑓 ∘ 𝑓 + ≥ 𝑖𝑑𝐸 ∘ 𝑓 + = 𝑓 + . So 𝑓 + ∘ 𝑓 ∘ 𝑓 + = 𝑓 + .
Which proves the theorem.
The next result shows that the residual of a composite is the composite of the residuals, taken in the reverse order.

Theorem 1.4.6: If 𝑓: 𝐸 → 𝐹 and 𝑔: 𝐹 → 𝐺 are residuated mappings then so is 𝑔 ∘ 𝑓: 𝐸 → 𝐺


and (𝑔 ∘ 𝑓)+ = 𝑓 + ∘ 𝑔+ .

Proof: Clearly 𝑔 ∘ 𝑓 and 𝑓 + ∘ 𝑔+ are isotone. Moreover


(𝑓 + ∘ 𝑔+ ) ∘ (𝑔 ∘ 𝑓) ≥ 𝑓 + ∘ 𝑖𝑑𝐹 ∘ 𝑓 = 𝑓 + ∘ 𝑓 ≥ 𝑖𝑑𝐸 and
(𝑔 ∘ 𝑓) ∘ (𝑓 + ∘ 𝑔+ ) ≤ 𝑔 ∘ 𝑖𝑑𝐹 ∘ 𝑔+ = 𝑔 ∘ 𝑔+ ≤ 𝑖𝑑𝐺 .
Thus 𝑔 ∘ 𝑓 is residuated and by the uniqueness of residuals (𝑔 ∘ 𝑓)+ is 𝑓 + ∘ 𝑔+ .

Corollary 1.4.7: For every ordered set 𝐸, the set Res 𝐸 of residuated mappings 𝑓: 𝐸 → 𝐸
forms a semigroup, as does the set Res⁺ 𝐸 of residual mappings 𝑓 + : 𝐸 → 𝐸.

Proof: If 𝑓, 𝑔 ∈ Res 𝐸 then 𝑓 ∘ 𝑔 ∈ Res 𝐸 by Theorem 1.4.6, and composition of mappings is
associative: for 𝑓, 𝑔, ℎ ∈ Res 𝐸 we have (𝑓 ∘ 𝑔) ∘ ℎ = 𝑓 ∘
(𝑔 ∘ ℎ). Thus Res 𝐸 is a semigroup, and likewise Res⁺ 𝐸.

Example 1.4.8: If 𝑓: 𝐸 → 𝐸 is residuated then 𝑓 = 𝑓 + if and only if 𝑓 2 = 𝑖𝑑𝐸 .

Solution: Suppose that 𝑓: 𝐸 → 𝐸 is residuated and 𝑓 = 𝑓 + .


Since 𝑓 is residuated then there is an isotone map 𝑓 + : 𝐸 → 𝐸 such that 𝑓 ∘ 𝑓 + ≤ 𝑖𝑑𝐸 and 𝑓 + ∘
𝑓 ≥ 𝑖𝑑𝐸 . Since 𝑓 = 𝑓 + then 𝑓 ∘ 𝑓 = 𝑓 + ∘ 𝑓 ≤ 𝑖𝑑𝐸 . Also 𝑓 ∘ 𝑓 = 𝑓 ∘ 𝑓 + ≥ 𝑖𝑑𝐸 , which
implies 𝑓 ∘ 𝑓 = 𝑖𝑑𝐸 . Thus 𝑓 2 = 𝑖𝑑𝐸 .
Conversely suppose that 𝑓 2 = 𝑖𝑑𝐸 , that is 𝑓 ∘ 𝑓 = 𝑖𝑑𝐸 . Then 𝑓 = 𝑖𝑑𝐸 ∘ 𝑓 ≤ (𝑓 + ∘ 𝑓) ∘ 𝑓 =
𝑓 + ∘ (𝑓 ∘ 𝑓) = 𝑓 + , which implies 𝑓 ≤ 𝑓 + . Also 𝑓 = 𝑓 ∘ 𝑖𝑑𝐸 ≥ 𝑓 ∘ (𝑓 ∘ 𝑓 + ) = (𝑓 ∘ 𝑓) ∘ 𝑓 + = 𝑓 + , so 𝑓 ≥ 𝑓 + . Thus from the above we get 𝑓 = 𝑓 + .
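A concrete self-residual mapping is the coordinate swap on a Cartesian product 𝐸 × 𝐸: it is an isotone involution, so by the example above it equals its own residual. A small Python check (this particular instance is our own choice):

```python
from itertools import product

# E = {0,1} x {0,1} under the Cartesian (componentwise) order
E = list(product([0, 1], repeat=2))
le = lambda p, q: p[0] <= q[0] and p[1] <= q[1]

swap = lambda p: (p[1], p[0])    # isotone involution: swap o swap = id

assert all(swap(swap(p)) == p for p in E)     # f^2 = id_E
# isotone: le(p, q) implies le(swap(p), swap(q))
assert all((not le(p, q)) or le(swap(p), swap(q))
           for p in E for q in E)
```

Since swap ∘ swap = 𝑖𝑑 satisfies both 𝑔 ∘ 𝑓 ≥ 𝑖𝑑 and 𝑓 ∘ 𝑔 ≤ 𝑖𝑑, swap is residuated with swap⁺ = swap.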
1.5 Isomorphism of Ordered Sets

We now consider the notion of an isomorphism of ordered sets. We certainly require such a
mapping 𝑓: 𝐸 ⟶ 𝐹 to be a bijection, and we also want 𝑓 −1 : 𝐹 ⟶ 𝐸 to behave well with
respect to the orders. We note that simply choosing 𝑓 to be an isotone bijection is not enough.
For example, let 𝐸 = {𝑥, 𝑦, 𝑧} with 𝑥 < 𝑦 and 𝑧 < 𝑦 (so that 𝑥 ∥ 𝑧), and let 𝐹 be the chain
𝛼 < 𝛽 < 𝛾. Then the mapping 𝑓: 𝐸 → 𝐹 given by 𝑓(𝑥) = 𝛼, 𝑓(𝑧) = 𝛽 and 𝑓(𝑦) = 𝛾 is an isotone
bijection, but 𝑓 −1 is not isotone since 𝛼 < 𝛽 while 𝑓 −1 (𝛼) = 𝑥 ∥ 𝑧 = 𝑓 −1 (𝛽).

Definition 1.5.1: By an order isomorphism from an ordered set 𝐸 to an ordered set 𝐹, we mean
an isotone bijection 𝑓: 𝐸 → 𝐹 whose inverse is also isotone.
Note: From the above results we see that the notion of an order isomorphism is equivalent
to that of a residuated bijection 𝑓 whose residual is its inverse 𝑓 −1 : 𝐹 → 𝐸. If there is an
order isomorphism 𝑓: 𝐸 → 𝐹 then we say that 𝐸 and 𝐹 are (order) isomorphic and
we write 𝐸 ≃ 𝐹.

Theorem 1.5.2: Ordered sets 𝐸 and 𝐹 are isomorphic if and only if there is a surjective mapping
𝑓: 𝐸 → 𝐹 such that, for all 𝑥, 𝑦 ∈ 𝐸,
𝑥 ≤ 𝑦 if and only if 𝑓(𝑥) ≤ 𝑓(𝑦).

Proof: Suppose 𝐸 ≃ 𝐹. Then by definition there is an isotone bijection 𝑓: 𝐸 → 𝐹 (in particular
surjective) whose inverse 𝑓 −1 is also isotone. Since 𝑓 is isotone, 𝑥 ≤ 𝑦 gives 𝑓(𝑥) ≤ 𝑓(𝑦);
and since 𝑓 −1 is isotone, 𝑓(𝑥) ≤ 𝑓(𝑦) gives 𝑥 ≤ 𝑦. This implies;
𝑥 ≤ 𝑦 if and only if 𝑓(𝑥) ≤ 𝑓(𝑦).
Conversely suppose that such a surjective mapping 𝑓 exists then 𝑓 is also injective, for if
𝑓(𝑥) = 𝑓(𝑦) then from 𝑓(𝑥) ≤ 𝑓(𝑦)we obtain 𝑥 ≤ 𝑦 and from 𝑓(𝑥) ≥ 𝑓(𝑦)we obtain 𝑥 ≥ 𝑦
so 𝑥 = 𝑦. Thus𝑓 is a bijection.
Also by given hypothesis 𝑓 is isotone and so is 𝑓 −1 , since 𝑥 ≤ 𝑦 can be written as 𝑓(𝑓 −1 (𝑥)) ≤
𝑓(𝑓 −1 (𝑦)). Which implies 𝑓 −1 (𝑥) ≤ 𝑓 −1 (𝑦).

Notation: We shall say that 𝐸 and 𝐹 are dually isomorphic if 𝐸 ≃ 𝐹 𝑑 or, equivalently, 𝐹 ≃ 𝐸 𝑑 .
In the particular case where 𝐸 ≃ 𝐸 𝑑 , we say that 𝐸 is self-dual.
Example 1.5.3: Let Sub ℤ be the set of subgroups of the additive abelian group ℤ, ordered
by set inclusion. Then (ℕ; |) is dually isomorphic to (Sub ℤ; ⊆) under the assignment 𝑛 ⟼
𝑛ℤ. In fact, since every subgroup of ℤ is of the form 𝑛ℤ for some 𝑛 ∈ ℕ, this assignment is
surjective. Also, we have 𝑛ℤ ⊆ 𝑚ℤ if and only if 𝑚|𝑛. Note that if we include zero in ℕ, then
0 is the top element of (ℕ; |) and it corresponds to the trivial subgroup {0}. The result therefore
follows by the above theorem.
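The order-reversing correspondence 𝑛ℤ ⊆ 𝑚ℤ ⟺ 𝑚|𝑛 can be checked computationally. The following Python sketch (helper names divides and in_subgroup are ours) tests membership of 𝑛ℤ over a finite window of integers:

```python
def in_subgroup(k, n):
    """k ∈ nZ?  Note 0Z = {0}."""
    return k == 0 if n == 0 else k % n == 0

def divides(m, n):
    """m | n in N ∪ {0}; every m divides 0."""
    return n == 0 if m == 0 else n % m == 0

# nZ ⊆ mZ  iff  m | n  — checked on a finite range of representatives
N = 12
for m in range(N):
    for n in range(N):
        contained = all(in_subgroup(k, m)
                        for k in range(-3 * N, 3 * N) if in_subgroup(k, n))
        assert contained == divides(m, n)
```

Note that the inclusion reverses the divisibility order, which is exactly what "dually isomorphic" asserts.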
We end this section and thereby this chapter by giving some examples of isomorphism
and dual isomorphism.

Proposition 1.5.4: If 𝐸 and 𝐹 are ordered sets, then under the Cartesian order 𝐸 × 𝐹 ≃ 𝐹 × 𝐸.

Proof: Define 𝑓 ∶ 𝐸 × 𝐹 → 𝐹 × 𝐸 by
𝑓(𝑎, 𝑏) = (𝑏, 𝑎).
𝒇 is surjective:
Let (𝑏, 𝑎) ∈ 𝐹 × 𝐸 then𝑏 ∈ 𝐹 and 𝑎 ∈ 𝐸, so there exists(𝑎, 𝑏) ∈ 𝐸 × 𝐹 such that;
𝑓(𝑎, 𝑏) = (𝑏, 𝑎).
Now let (𝑎, 𝑏), (𝑎′ , 𝑏 ′ ) ∈ 𝐸 × 𝐹. Then (𝑎, 𝑏) ≤ (𝑎′ , 𝑏 ′ )
if and only if 𝑎 ≤ 𝑎′ in 𝐸 and 𝑏 ≤ 𝑏 ′ in 𝐹
if and only if 𝑏 ≤ 𝑏 ′ in 𝐹 and 𝑎 ≤ 𝑎′ in 𝐸
if and only if (𝑏, 𝑎) ≤ (𝑏 ′ , 𝑎′ ) in 𝐹 × 𝐸
if and only if 𝑓(𝑎, 𝑏) ≤ 𝑓(𝑎′ , 𝑏 ′ ).
Therefore, by (Theorem 1.5.2) 𝐸 × 𝐹 ≃ 𝐹 × 𝐸.

Example 1.5.5: Prove that (𝐸 × 𝐹)𝑑 ≃ 𝐸 𝑑 × 𝐹 𝑑 .

Proof: Define 𝑓: (𝐸 × 𝐹)𝑑 ⟶ 𝐸 𝑑 × 𝐹 𝑑 by


𝑓(𝑥, 𝑦) = (𝑥, 𝑦)
Suppose (𝑥1 , 𝑦1 ) ≤ (𝑥2 , 𝑦2 ) in (𝐸 × 𝐹)𝑑
if and only if (𝑥2 , 𝑦2 ) ≤ (𝑥1 , 𝑦1 ) in 𝐸 × 𝐹
if and only if 𝑥2 ≤ 𝑥1 in 𝐸 and 𝑦2 ≤ 𝑦1 in 𝐹
if and only if 𝑥1 ≤ 𝑥2 in 𝐸 𝑑 and 𝑦1 ≤ 𝑦2 in 𝐹 𝑑
if and only if (𝑥1 , 𝑦1 ) ≤ (𝑥2 , 𝑦2 ) in 𝐸 𝑑 × 𝐹 𝑑 .
Thus 𝑥 ≤ 𝑦 in (𝐸 × 𝐹)𝑑 if and only if 𝑓(𝑥) ≤ 𝑓(𝑦), and 𝑓 is clearly surjective. Therefore, by Theorem 1.5.2, (𝐸 × 𝐹)𝑑 ≃ 𝐸 𝑑 × 𝐹 𝑑 .

Example 1.5.6: Prove that (𝑃(𝐸) ; ⊆) is self dual.

Proof: We show that the dual of (𝑃(𝐸) ; ⊆ ) is (𝑃(𝐸) ; ⊇ ).


So, we define a map𝑓 ∶ (𝑃(𝐸) ; ⊆) → (𝑃(𝐸); ⊇) by;
𝑓(𝑋) = 𝑋 ′ (where𝑋 ′ is complement of 𝑋).
𝒇 is injective:
Let 𝐴, 𝐵 ∈ (𝑃(𝐸); ⊆) such that 𝑓(𝐴) = 𝑓(𝐵)or𝐴′ = 𝐵 ′ (by definition of 𝑓), which implies
(𝐴′ )′ = (𝐵 ′ )′ . Thus 𝐴 = 𝐵.
𝒇 is surjective:
Let 𝐴´ ∈ (𝑃(𝐸); ⊇) then (𝐴´)´ = 𝐴 ∈ (𝑃(𝐸); ⊆) such that 𝑓((𝐴´)´) = 𝑓(𝐴) = 𝐴´. Thus 𝑓 is
surjective.
𝒇 is isotone:
Suppose 𝐴, 𝐵 ∈ (𝑃(𝐸), ⊆) such that 𝐴 ⊆ 𝐵,we have to show that 𝐴´ ⊇ 𝐵´.
Let 𝑥 ∈ 𝐵´, then 𝑥 ∉ 𝐵, which implies 𝑥 ∉ 𝐴 (as 𝐴 ⊆ 𝐵) or 𝑥 ∈ 𝐴´, so 𝐴´ ⊇ 𝐵´. Thus 𝑓 is
isotone.
𝒇−𝟏 is isotone:
Define 𝑓 −1 : (𝑃(𝐸); ⊇) → (𝑃(𝐸) ; ⊆) by 𝑓 −1 (𝑋 ′ ) = 𝑋.
Suppose 𝐴′, 𝐵´ ∈ (𝑃(𝐸); ⊇) such that 𝐴´ ⊇ 𝐵 ′ . We show that𝐴 ⊆ 𝐵.
Let 𝑧 ∈ 𝐴, then 𝑧 ∉ 𝐴′ which implies 𝑧 ∉ 𝐵 ′ ; this gives 𝑧 ∈ 𝐵. Thus 𝐴 ⊆ 𝐵 showing that 𝑓 −1
is isotone.
Hence (𝑃(𝐸), ⊆ ) ≃ (𝑃(𝐸), ⊇ ).

Example 1.5.7: Let 𝟐 denote the two-element chain 0 < 1. Prove that the mapping
𝑓: ℙ({1,2, … , 𝑛}) → 𝟐𝑛 given by
𝑓(𝑋) = (𝑥1 , … , 𝑥𝑛 ), where 𝑥𝑖 = 1 if 𝑖 ∈ 𝑋 and 𝑥𝑖 = 0 otherwise,
for each 𝑋 ⊆ {1,2, … , 𝑛}, is an order isomorphism.

Proof: 𝒇 is isotone: Let 𝑋, 𝑌 ∈ 𝑃(1,2, … , 𝑛) and let 𝑓(𝑋) = (𝑥1 , … , 𝑥𝑛 ), 𝑓(𝑌) = (𝑦1 , … , 𝑦𝑛 )
then
𝑋 ⊆ 𝑌 if and only if (for all 𝑖) 𝑖 ∈ 𝑋 implies 𝑖 ∈ 𝑌
if and only if (for all 𝑖) 𝑥𝑖 = 1 implies𝑦𝑖 = 1
if and only if (for all 𝑖) 𝑥𝑖 ≤ 𝑦𝑖
if and only if 𝑓(𝑋) ≤ 𝑓(𝑌) in2𝑛 .
To show 𝑓 is onto, take 𝑥 = (𝑥1 , 𝑥2 , … , 𝑥𝑛 ) ∈ 𝟐𝑛 ; then
𝑥 = 𝑓(𝑋) where 𝑋 = {𝑖 | 𝑥𝑖 = 1}.
So 𝑓 is onto. Therefore 𝑓 is an order isomorphism.
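The isomorphism of this example is exactly the "characteristic vector" encoding of subsets, and for small 𝑛 it can be verified exhaustively. A short Python sketch (our own function name f):

```python
from itertools import combinations, product

n = 3
universe = range(1, n + 1)

def f(X):
    """Characteristic tuple of X ⊆ {1,...,n} in the chain 2 = {0 < 1}."""
    return tuple(1 if i in X else 0 for i in universe)

subsets = [frozenset(c) for r in range(n + 1)
           for c in combinations(universe, r)]

# f is a bijection onto 2^n
assert sorted(map(f, subsets)) == sorted(product([0, 1], repeat=n))
# X ⊆ Y  iff  f(X) ≤ f(Y) componentwise
for X in subsets:
    for Y in subsets:
        assert (X <= Y) == all(a <= b for a, b in zip(f(X), f(Y)))
```

This is why (ℙ({1, … , 𝑛}); ⊆) and 𝟐𝑛 are often used interchangeably.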
Chapter 2

Introduction to Lattices
Many important properties of an ordered set 𝑃 are expressed in terms of the existence of certain
upper bounds or lower bounds of subsets of 𝑃. Two of the most important classes of ordered sets
defined in this way are lattices and complete lattices. In this chapter we present the basic theory
of such ordered sets and also consider lattices as algebraic structures. We also discuss a special
type of lattice called the down-set lattice, and the mappings which preserve the operations of
lattices. The chapter ends with some important results and examples on complete lattices.

2.1 Semilattices and Lattices

In this section we will go through the definition of semilattices and lattices and discuss lattice
as an algebraic structure. The section ends with some of the important results on lattices.
If 𝐸 is an ordered set and 𝑥 ∈ 𝐸 then the canonical embedding of 𝑥 ↓ into 𝐸, that is the restriction
to 𝑥 ↓ of the identity mapping on 𝐸, is clearly isotone. For if 𝑖𝑥 ∶ 𝑥 ↓ → 𝐸 is defined by 𝑖𝑥 (𝑦) = 𝑦,
then 𝑦 ≤ 𝑧 in 𝑥 ↓ gives 𝑖𝑥 (𝑦) = 𝑦 ≤ 𝑧 = 𝑖𝑥 (𝑧).
We shall now see when such an embedding is residuated. This has important
consequences as far as the structure of 𝐸 is concerned.

Theorem 2.1.1: If 𝐸 is an ordered set then the following are equivalent:


(1) for every 𝑥 ∈ 𝐸 the canonical embedding of 𝑥 ↓ into 𝐸 is residuated;
(2) the intersection of any two principal down-sets is a principal down-set.

Proof: (1) ⟺ (2): For each 𝑥 ∈ 𝐸, let 𝑖𝑥 : 𝑥 ↓ → 𝐸 be the canonical embedding. By the
definition of a residuated mapping, (1) holds if and only if for all 𝑥, 𝑦 ∈ 𝐸 there exists 𝛼 =
max{𝑧 ∈ 𝑥 ↓ : 𝑧 = 𝑖𝑥 (𝑧) ≤ 𝑦}. We claim that this is equivalent to 𝑥 ↓ ∩ 𝑦 ↓ = 𝛼 ↓ . On the one
hand 𝛼 ≤ 𝑥 and 𝛼 ≤ 𝑦, so if 𝑧 ∈ 𝛼 ↓ then 𝑧 ≤ 𝑥 and 𝑧 ≤ 𝑦, that is 𝑧 ∈ 𝑥 ↓ ∩ 𝑦 ↓ ; hence
𝛼 ↓ ⊆ 𝑥 ↓ ∩ 𝑦 ↓ . Conversely, if 𝑘 ∈ 𝑥 ↓ ∩ 𝑦 ↓ then 𝑘 ≤ 𝑥 and 𝑘 ≤ 𝑦, so 𝑘 ∈ {𝑧 ∈ 𝑥 ↓ : 𝑧 ≤ 𝑦}
and therefore 𝑘 ≤ 𝛼, that is 𝑘 ∈ 𝛼 ↓ . Thus 𝑥 ↓ ∩ 𝑦 ↓ = 𝛼 ↓ , which is (2).

Definition 2.1.2: If 𝐸 satisfies either of the equivalent conditions of the above theorem then we
shall denote by 𝑥 ∧ 𝑦 the element 𝛼 such that 𝑥 ↓ ∩ 𝑦 ↓ = 𝛼 ↓ and call 𝑥 ∧ 𝑦 the meet of 𝑥 and
𝑦. In this situation we shall say that 𝐸 is a meet semilattice.

We can of course develop the duals of the above, obtaining in this way the notion of a join
semilattice, which is characterized by the intersection of any two principal up-sets being a
principal up-set; the element 𝛽 such that 𝑥 ↑ ∩ 𝑦 ↑ = 𝛽 ↑ is denoted by 𝑥 ∨ 𝑦 and
called the join of 𝑥 and 𝑦.
Definition 2.1.3: The minimum of a subset 𝑆 of a partially ordered set (𝐸; ≤) is an element of
𝑆 which is less than or equal to any other element of 𝑆.
Proposition 2.1.4: Every chain is a meet semilattice in which 𝑥 ∧ 𝑦 = 𝑚𝑖𝑛{𝑥, 𝑦}.

Proof: Let 𝐶 be any chain and let 𝑥, 𝑦 ∈ 𝐶; then either 𝑥 ≤ 𝑦 or 𝑦 ≤ 𝑥. Without loss of
generality suppose that 𝑥 ≤ 𝑦; then 𝑚𝑖𝑛{𝑥, 𝑦} = 𝑥. Now 𝑥 ≤ 𝑦 implies 𝑥 ↓ ⊆ 𝑦 ↓ , so
𝑥 ↓ ∩ 𝑦 ↓ = 𝑥 ↓ . Thus by the
definition of meet, 𝑥 ∧ 𝑦 = 𝑥 = 𝑚𝑖𝑛{𝑥, 𝑦}.

Example 2.1.5: (ℕ; |) is a meet semilattice in which 𝑚 ∧ 𝑛 = ℎ𝑐𝑓 {𝑚, 𝑛}.

Solution: Let 𝑚, 𝑛 ∈ ℕ and let ℎ = ℎ𝑐𝑓{𝑚, 𝑛}. For any 𝑑 ∈ ℕ we have 𝑑 | 𝑚 and 𝑑 | 𝑛 if and
only if 𝑑 | ℎ; that is, 𝑑 ∈ 𝑚↓ ∩ 𝑛↓ if and only if 𝑑 ∈ ℎ↓ . Thus 𝑚↓ ∩ 𝑛↓ = ℎ↓ , and so by
definition;
𝑚 ∧ 𝑛 = ℎ𝑐𝑓{𝑚, 𝑛}.
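The key fact used here, that 𝑑 | 𝑚 and 𝑑 | 𝑛 if and only if 𝑑 | ℎ𝑐𝑓{𝑚, 𝑛}, can be verified over a finite range in Python (a brute-force check, not a proof):

```python
from math import gcd

# In (N, |): d is a lower bound of {m, n} iff d | m and d | n,
# which holds iff d | gcd(m, n) — so m ∧ n = gcd(m, n) = hcf{m, n}.
for m in range(1, 30):
    for n in range(1, 30):
        g = gcd(m, n)
        for d in range(1, 30):
            assert ((m % d == 0) and (n % d == 0)) == (g % d == 0)
```

Note that `math.gcd` computes exactly the hcf of the text.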
Meet-semilattices and join-semilattices can also be characterized in a purely algebraic way
which we shall now describe.

Proposition 2.1.6: The meet semilattice ( 𝐸 ; ∧ ) is a commutative idempotent semigroup.

Proof : Define the composition on ( 𝐸;∧ ) by 𝑥 ∙ 𝑦 = 𝑥 ∧ 𝑦.


Since 𝑥 ↓ ∩ (𝑦 ↓ ∩ 𝑧 ↓ ) = (𝑥 ↓ ∩ 𝑦 ↓ ) ∩ 𝑧 ↓ , we have 𝑥 ∧ (𝑦 ∧ 𝑧) = (𝑥 ∧ 𝑦) ∧ 𝑧; this
gives 𝑥 ∙ (𝑦 ∙ 𝑧) = (𝑥 ∙ 𝑦) ∙ 𝑧. Therefore ∧ is associative.
Also 𝑥 ↓ ∩ 𝑦 ↓ = 𝑦 ↓ ∩ 𝑥 ↓ ; this implies 𝑥 ∧ 𝑦 = 𝑦 ∧ 𝑥; this implies𝑥 ∙ 𝑦 = 𝑦 ∙ 𝑥 showing that ∧
is commutative.
Since 𝑥 ↓ ∩ 𝑥 ↓ = 𝑥 ↓ ; this implies 𝑥 ∧ 𝑥 = 𝑥; implies 𝑥 ∙ 𝑥 = 𝑥; which implies 𝑥 2 = 𝑥. So that
the operation of ∧ is idempotent.
Thus (𝐸; ∧) is a semigroup whose single binary operation is idempotent and commutative,
as required.

Proposition 2.1.7: The join semilattice (𝐸; ∨) is a commutative idempotent semigroup.

Proof: The proof follows by applying the principle of duality to Proposition 2.1.6.

The following results show that the converse of Propositions 2.1.6 and 2.1.7 holds, that is,
every commutative idempotent semigroup gives rise to a meet semilattice and to a join semilattice.

Theorem 2.1.8: Every commutative idempotent semigroup can be ordered in such a way that
it forms a meet semilattice.

Proof: Suppose that 𝐸 is a commutative idempotent semigroup in which we shall denote the
law of composition by juxtaposition. Define a relation ≤ on 𝐸 by 𝑥 ≤ 𝑦 if and only if 𝑥𝑦 =
𝑥. We first show that ≤ is an order.
Reflexivity: Since 𝐸 is idempotent therefore we have 𝑥 2 = 𝑥 for every 𝑥 ∈ 𝐸 this implies
that 𝑥𝑥 = 𝑥 for all 𝑥 ∈ 𝐸; which implies 𝑥 ≤ 𝑥. So that ≤ is reflexive.
Antisymmetry: Let 𝑥 ≤ 𝑦 and 𝑦 ≤ 𝑥 for all 𝑥, 𝑦 ∈ 𝐸 then by commutativity of 𝐸 we have
𝑥 = 𝑥𝑦 = 𝑦𝑥 = 𝑦; thus 𝑥 = 𝑦. Hence ≤ is antisymmetric.
Transitivity: Let 𝑥 ≤ 𝑦 and 𝑦 ≤ 𝑧; then by definition 𝑥 = 𝑥𝑦 and 𝑦 = 𝑦𝑧, which implies
𝑥 = 𝑥𝑦 = 𝑥𝑦𝑧 = 𝑥𝑧, and therefore we get 𝑥 ≤ 𝑧, so that ≤ is transitive.
Now if 𝑥, 𝑦 ∈ 𝐸 then we have (𝑥𝑦)𝑥 = 𝑥𝑦𝑥 = 𝑥𝑥𝑦 = 𝑥𝑦 and so 𝑥𝑦 ≤ 𝑥. Interchanging the
roles of 𝑥, 𝑦 we also have (𝑥𝑦)𝑦 = 𝑥𝑦𝑦 = 𝑥𝑦, so 𝑥𝑦 ≤ 𝑦; therefore 𝑥𝑦 ∈ 𝑥 ↓ ∩ 𝑦 ↓ . We now
suppose that 𝑧 ∈ 𝑥 ↓ ∩ 𝑦 ↓ . Then 𝑧 ≤ 𝑥 and 𝑧 ≤ 𝑦, that is 𝑧 = 𝑧𝑥 and 𝑧 = 𝑧𝑦, so
𝑧(𝑥𝑦) = (𝑧𝑥)𝑦 = 𝑧𝑦 = 𝑧 and therefore 𝑧 ≤ 𝑥𝑦. This shows that 𝑥𝑦 is the top element of
𝑥 ↓ ∩ 𝑦 ↓ . Thus 𝐸 is a meet semilattice in which 𝑥 ∧ 𝑦 = 𝑥𝑦.
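The construction of the theorem can be traced on a concrete commutative idempotent semigroup, e.g. subsets of {1, 2, 3} under intersection. A Python sketch (our own example; the derived order 𝑥 ≤ 𝑦 ⟺ 𝑥𝑦 = 𝑥 turns out to be inclusion, and 𝑥𝑦 the meet):

```python
from itertools import combinations

# A commutative idempotent semigroup: subsets of {1,2,3} under ∩
E = [frozenset(c) for r in range(4) for c in combinations({1, 2, 3}, r)]
op = lambda x, y: x & y

le = lambda x, y: op(x, y) == x          # the derived order: x ≤ y iff xy = x

# le is a partial order
assert all(le(x, x) for x in E)                                    # reflexive
assert all(not (le(x, y) and le(y, x)) or x == y
           for x in E for y in E)                                  # antisymmetric
assert all(not (le(x, y) and le(y, z)) or le(x, z)
           for x in E for y in E for z in E)                       # transitive

# xy is the greatest lower bound of {x, y}
for x in E:
    for y in E:
        m = op(x, y)
        assert le(m, x) and le(m, y)
        assert all(le(z, m) for z in E if le(z, x) and le(z, y))
```

Here le(x, y) reduces to 𝑥 ⊆ 𝑦, so the theorem recovers the familiar meet semilattice (ℙ({1,2,3}); ∩).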

Theorem 2.1.9: Every commutative idempotent semigroup can be ordered in such a way that
it forms a join semilattice.

Proof: Suppose that 𝐸 is a commutative idempotent semigroup in which we denote the law of
composition by juxtaposition. Define a relation ≤ on 𝐸 by 𝑥 ≤ 𝑦 if and only if 𝑥𝑦 = 𝑦. As in
the proof of the above theorem, ≤ is an order. If 𝑥, 𝑦 ∈ 𝐸, then 𝑥(𝑥𝑦) = 𝑥𝑥𝑦 = 𝑥𝑦, so
𝑥 ≤ 𝑥𝑦. Interchanging the roles of 𝑥 and 𝑦 we also have 𝑦 ≤ 𝑥𝑦. Therefore 𝑥𝑦 ∈ 𝑥 ↑ ∩ 𝑦 ↑ .
Suppose now that 𝑧 ∈ 𝑥 ↑ ∩ 𝑦 ↑ ; then 𝑥 ≤ 𝑧 and 𝑦 ≤ 𝑧, that is 𝑥𝑧 = 𝑧 and 𝑦𝑧 = 𝑧. So
(𝑥𝑦)𝑧 = 𝑥(𝑦𝑧) = 𝑥𝑧 = 𝑧 and therefore 𝑥𝑦 ≤ 𝑧; this implies 𝑥𝑦 = 𝑠𝑢𝑝{𝑥, 𝑦}. Thus 𝐸 is a
join semilattice in which 𝑥 ∨ 𝑦 = 𝑥𝑦.

Example 2.1.10: If 𝑃 and 𝑄 are meet semilattices then the set of isotone mappings from 𝑃 to 𝑄
forms a meet semilattice with respect to the order defined by;
𝑓 ≤ 𝑔 if and only if 𝑓(𝑥) ≤ 𝑔(𝑥) for all 𝑥 ∈ 𝑃.

Solution: Let 𝑀𝑎𝑝(𝑃, 𝑄) denote the set of isotone mappings from 𝑃 to 𝑄, ordered as above.
Given 𝑓, 𝑔 ∈ 𝑀𝑎𝑝(𝑃, 𝑄), define ℎ: 𝑃 → 𝑄 by ℎ(𝑥) = 𝑓(𝑥) ∧ 𝑔(𝑥). Then ℎ is isotone: if
𝑥 ≤ 𝑦 in 𝑃 then 𝑓(𝑥) ≤ 𝑓(𝑦) and 𝑔(𝑥) ≤ 𝑔(𝑦), so ℎ(𝑥) = 𝑓(𝑥) ∧ 𝑔(𝑥) ≤ 𝑓(𝑦) ∧ 𝑔(𝑦) = ℎ(𝑦).
Clearly ℎ ≤ 𝑓 and ℎ ≤ 𝑔, so ℎ is a lower bound of {𝑓, 𝑔} in 𝑀𝑎𝑝(𝑃, 𝑄). Moreover, if
𝑘 ∈ 𝑀𝑎𝑝(𝑃, 𝑄) with 𝑘 ≤ 𝑓 and 𝑘 ≤ 𝑔, then for every 𝑥 ∈ 𝑃 we have 𝑘(𝑥) ≤ 𝑓(𝑥) and
𝑘(𝑥) ≤ 𝑔(𝑥), whence 𝑘(𝑥) ≤ 𝑓(𝑥) ∧ 𝑔(𝑥) = ℎ(𝑥), so 𝑘 ≤ ℎ. Thus 𝑓 ∧ 𝑔 exists and is given
pointwise by (𝑓 ∧ 𝑔)(𝑥) = 𝑓(𝑥) ∧ 𝑔(𝑥), so 𝑀𝑎𝑝(𝑃, 𝑄) is a meet semilattice.

Definition 2.1.11: If 𝐸 is an ordered set and 𝐹 is a subset of 𝐸 then 𝑥 ∈ 𝐸 is said to be lower


bound of 𝐹 if for all 𝑦 ∈ 𝐹, 𝑥 ≤ 𝑦 and an upper bound of 𝐹 if for all 𝑦 ∈ 𝐹 , 𝑦 ≤ 𝑥. In what
follows we shall denote the set of lower bounds of 𝐹 in 𝐸 by 𝐹 ↓ , and the set of upper bounds
of 𝐹 in 𝐸 by 𝐹 ↑ .
Remark 2.1.12:

(i) Caution: the notation 𝐹 ↓ defined here (the set of lower bounds of 𝐹) should not be confused
with the down-set generated by 𝐹, namely {𝑥 ∈ 𝐸 | there exists 𝑎 ∈ 𝐹 with 𝑥 ≤ 𝑎}. For a
singleton the two coincide: {𝑥}↓ = 𝑥 ↓ , and dually {𝑥}↑ = 𝑥 ↑ .

(ii) Note that 𝐹 ↓ or 𝐹 ↑ may be empty. For instance, take 𝐹 = 𝐸: if 𝐸 has a top element 1 then,
since 𝑥 ≤ 1 for every 𝑥 ∈ 𝐸, we have 𝐸 ↑ = {1}; otherwise 𝐸 ↑ = ∅. Similarly,
if 𝐸 has a bottom element 0 then 𝐸 ↓ = {0}; otherwise 𝐸 ↓ = ∅.

(iii) Note that if 𝐹 = ∅ then every 𝑥 ∈ 𝐸 vacuously satisfies the relation
𝑦 ≤ 𝑥 for every 𝑦 ∈ 𝐹. Thus ∅↑ = 𝐸, and similarly ∅↓ = 𝐸.

Definition 2.1.13: If 𝐸 is an ordered set and 𝐹 is a subset of 𝐸 then by the infimum or greatest
lower bound of 𝐹 we mean the top element when such exists of the set 𝐹 ↓ of lower bounds of
𝐹. We denote this by 𝑖𝑛𝑓𝐸 𝐹 or simply 𝑖𝑛𝑓𝐹, if there is no confusion.

Since ∅↓ = 𝐸, we see that 𝑖𝑛𝑓𝐸 ∅ exists if and only if 𝐸 has a top element 1, in which case
𝑖𝑛𝑓𝐸 ∅ = 1. It is immediate from what has gone before that a meet semilattice can be described
as an ordered set in which every pair of elements 𝑥, 𝑦 has a greatest lower bound; here we have
𝑖𝑛𝑓{𝑥, 𝑦} = 𝑥 ∧ 𝑦. A simple inductive argument shows that for every finite subset
{𝑥1 , 𝑥2 , . . . , 𝑥𝑛 } of a meet semilattice we have that 𝑖𝑛𝑓{𝑥1 , 𝑥2 , … , 𝑥𝑛 } exists and is 𝑥1 ∧ 𝑥2 ∧
… ∧ 𝑥𝑛 .

Definition 2.1.14: Let 𝐸 be an ordered set and 𝐹 a subset of 𝐸. An element 𝑚 ∈ 𝐸 is called
the least upper bound of 𝐹, or 𝑠𝑢𝑝(𝐹), if 𝑚 is an upper bound of 𝐹 and 𝑚 ≤ 𝑛 whenever 𝑛 is an
upper bound of 𝐹.
On many ordered sets it is possible to define binary operations by using the greatest lower bound
and the least upper bound of two elements.

Definition 2.1.15: A lattice is an ordered set (𝐸, ≤) which with respect to its order is both a
meet semilattice and join semilattice.
Thus a lattice is an ordered set in which every pair of elements, and hence every finite non-empty
subset, has an infimum and a supremum. We often denote a lattice by (𝐸; ∧, ∨, ≤).

Remarks 2.1.16:

(1) Let 𝐸 be an ordered set. If 𝑥, 𝑦 ∈ 𝐸 and 𝑥 ≤ 𝑦 then {𝑥, 𝑦}𝑢 = 𝑦 ↑ and {𝑥, 𝑦}𝑙 = 𝑥 ↓ (where
{𝑥, 𝑦}𝑢 denotes the set of upper bounds of {𝑥, 𝑦} and {𝑥, 𝑦}𝑙 the set of lower bounds of {𝑥, 𝑦}).
Since the least element of 𝑦 ↑ is 𝑦 and the greatest element of 𝑥 ↓ is 𝑥, we have 𝑥 ∨ 𝑦 = 𝑦 and 𝑥 ∧ 𝑦 = 𝑥
whenever 𝑥 ≤ 𝑦. In particular, since “≤” is reflexive, we have 𝑥 ∨ 𝑥 = 𝑥 and 𝑥 ∧ 𝑥 = 𝑥.
(2) In an ordered set 𝐸, the 𝑙𝑢𝑏 of {𝑥, 𝑦} may fail to exist for two different reasons:
(a) because 𝑥 and 𝑦 have no common upper bound;
(b) because they have common upper bounds but no least upper bound.
For example, in a two-element anti-chain {𝑎, 𝑏} we have {𝑎, 𝑏}𝑢 = ∅ and hence 𝑎 ∨ 𝑏 does not
exist; while in an ordered set in which {𝑎, 𝑏}𝑢 = {𝑐, 𝑑} with 𝑐 ∥ 𝑑, the join 𝑎 ∨ 𝑏 does not exist
since {𝑎, 𝑏}𝑢 has no least element.
(3) Let 𝐿 be a lattice then for all 𝑎, 𝑏, 𝑐, 𝑑 ∈ 𝐿,
(i) 𝑎 ≤ 𝑏 implies 𝑎 ∨ 𝑐 ≤ 𝑏 ∨ 𝑐 and 𝑎 ∧ 𝑐 ≤ 𝑏 ∧ 𝑐;
(ii) 𝑎 ≤ 𝑏 and 𝑐 ≤ 𝑑 implies 𝑎 ∨ 𝑐 ≤ 𝑏 ∨ 𝑑 and 𝑎 ∧ 𝑐 ≤ 𝑏 ∧ 𝑑.
(4) Let 𝐿 be a lattice, let 𝑎, 𝑏, 𝑐 ∈ 𝐿 and assume 𝑏 ≤ 𝑎 ≤ 𝑏 ∨ 𝑐. Since 𝑐 ≤ 𝑏 ∨ 𝑐 we have
(𝑏 ∨ 𝑐) ∨ 𝑐 = 𝑏 ∨ 𝑐 (by (1)). Thus 𝑏 ∨ 𝑐 ≤ 𝑎 ∨ 𝑐 ≤ (𝑏 ∨ 𝑐) ∨ 𝑐 = 𝑏 ∨ 𝑐, whence 𝑎 ∨ 𝑐 = 𝑏 ∨
𝑐.

Lemma 2.1.17: (Connecting lemma) Let 𝐿 be a lattice and 𝑎, 𝑏 ∈ 𝐿. Then the following are
equivalent:
(i) 𝑎 ≤ 𝑏;
(ii) 𝑎 ∨ 𝑏 = 𝑏;
(iii) 𝑎 ∧ 𝑏 = 𝑎.

Proof: We have already shown in the above remark that (i) implies both (ii) and (iii). Now assume
(ii) holds; then 𝑏 is an upper bound of {𝑎, 𝑏}, whence 𝑎 ≤ 𝑏. Thus (i) holds. Likewise, we
can show that (iii) implies (i).

Theorem 2.1.18: A set 𝐸 can be given the structure of lattice if and only if it can be endowed
with two laws of composition (𝑥, 𝑦) ↦ 𝑥 ⋒ 𝑦 and (𝑥, 𝑦) ↦ 𝑥 ⋓ 𝑦 such that

(1) (𝐸 ;⋒) and (𝐸 ;⋓) are commutative semigroups.


(2) The following absorption law holds:
for all 𝑥, 𝑦 ∈ 𝐸: 𝑥 ⋒ (𝑥 ⋓ 𝑦) = 𝑥 = 𝑥 ⋓ (𝑥 ⋒ 𝑦).

Proof: Suppose that 𝐸 is a lattice then by definition ( 𝐸; ≤ ) is both a meet semilattice and a
join semilattice. That is, it has two laws of composition that satisfy (1) namely (𝑥, 𝑦) ↦ 𝑥 ∧
𝑦 and(𝑥, 𝑦) ↦ 𝑥 ∨ 𝑦. To show that (2) holds we have;
𝑥 ≤ sup {𝑥, 𝑦} = 𝑥 ∨ 𝑦 so by connecting lemma 𝑥 ∧ (𝑥 ∨ 𝑦) = 𝑖𝑛𝑓 { 𝑥 , 𝑥 ∨ 𝑦} = 𝑥. Also
𝑥 ≥ 𝑖𝑛𝑓 {𝑥, 𝑦} = 𝑥 ∧ 𝑦 and so by connecting lemma 𝑥 ∨ (𝑥 ∧ 𝑦) = 𝑥. Thus 𝑥 ∧ (𝑥 ∨ 𝑦) = 𝑥 ∨
(𝑥 ∧ 𝑦) = 𝑥. Which proves (2).
Suppose now that 𝐸 has two laws of compositions ⋒ and ⋓ that satisfy (1) and (2). Using (2)
we have 𝑥 ⋓ 𝑥 = 𝑥 ⋓ [𝑥 ⋒ (𝑥 ⋓ 𝑥 )]; again using (2) we have 𝑥 ⋓ [𝑥 ⋒ (𝑥 ⋓ 𝑥 )] = 𝑥.
Thus 𝑥 ⋓ 𝑥 = 𝑥 and similarly 𝑥 ⋒ 𝑥 = 𝑥, which shows that both operations are idempotent.
Hence (𝐸; ⋒) and (𝐸; ⋓) are commutative idempotent semigroups, so by Theorems 2.1.8 and
2.1.9 they are semilattices. In order to show that (𝐸; ⋒, ⋓) is a lattice with ⋒ as ∧ and ⋓ as
∨, we must show that the orders defined by the two operations coincide. In other words
we must show that 𝑥 ⋒ 𝑦 = 𝑥 is equivalent to 𝑥 ⋓ 𝑦 = 𝑦. Now if 𝑥 ⋓ 𝑦 = 𝑦 then by using
absorption law we have 𝑥 = 𝑥 ⋒ (𝑥 ⋓ 𝑦 ) = 𝑥 ⋒ 𝑦; which implies 𝑥 ⋒ 𝑦 = 𝑥 and if 𝑥 ⋒
𝑦 = 𝑥 then using the absorption law we have 𝑦 = ( 𝑥 ⋒ 𝑦 ) ⋓ 𝑦 = 𝑥 ⋓ 𝑦. Thus we see that 𝐸
is a lattice in which 𝑥 ≤ 𝑦 is described equivalently by 𝑥 ⋒ 𝑦 = 𝑥 or by 𝑥 ⋓ 𝑦 = 𝑦.
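A concrete instance of Theorem 2.1.18 is ℕ with ⋒ = ℎ𝑐𝑓 and ⋓ = 𝑙𝑐𝑚; the absorption laws hold and both operations induce the divisibility order. A Python sketch checking this over a finite range (a verification, not a proof):

```python
from math import gcd

def lcm(x, y):
    """Least common multiple, with lcm(x, 0) = 0."""
    return 0 if x == 0 or y == 0 else x * y // gcd(x, y)

# On N: take ⋒ = gcd and ⋓ = lcm. Check the absorption laws and that
# the two induced orders coincide with divisibility x | y.
for x in range(1, 40):
    for y in range(1, 40):
        assert gcd(x, lcm(x, y)) == x == lcm(x, gcd(x, y))   # absorption
        assert (gcd(x, y) == x) == (lcm(x, y) == y) == (y % x == 0)
```

The last assertion is exactly the equivalence 𝑥 ⋒ 𝑦 = 𝑥 ⟺ 𝑥 ⋓ 𝑦 = 𝑦 established in the proof.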

Lemma 2.1.17: Every chain is a lattice.

Proof: To prove that every chain (𝑃; ≤) is a lattice, fix some 𝑎, 𝑏 ∈ 𝑃 and without loss of
generality assume that 𝑎 ≤ 𝑏. From reflexivity of “≤” we have 𝑎 ≤ 𝑎; hence 𝑎 is a lower
bound of the set {𝑎, 𝑏}. To prove that it is the greatest lower bound, note that if some 𝑐 ∈ 𝑃
is another lower bound of {𝑎, 𝑏} then 𝑐 ≤ 𝑎. It means 𝑎 = 𝑖𝑛𝑓{𝑎, 𝑏}. Now we show that
𝑏 = 𝑠𝑢𝑝{𝑎, 𝑏}. From reflexivity of “≤” we have 𝑏 ≤ 𝑏; also by our assumption 𝑎 ≤ 𝑏.
Hence 𝑏 is an upper bound of the set {𝑎, 𝑏}. To prove that it is the least upper bound, note
that if 𝑘 ∈ 𝑃 is another upper bound of {𝑎, 𝑏} then 𝑏 ≤ 𝑘, and therefore
𝑠𝑢𝑝{𝑎, 𝑏} = 𝑏. This shows that (𝑃; ≤) is a lattice.

Definition 2.1.18: A lattice 𝐿 is said to be a bounded lattice if it has both a top element,
denoted by 1, and a bottom element, denoted by 0.

Example 2.1.19: For every set 𝐸, ( ℙ (𝐸 );∩,∪, ⊆) is a bounded lattice.

Solution: For any two elements 𝑆 and 𝑇 in ℙ(𝐸) we have 𝑆 ⊆ 𝑆 ∪ 𝑇 and 𝑇 ⊆ 𝑆 ∪ 𝑇. Thus 𝑆 ∪
𝑇 is an upper bound of 𝑆 and 𝑇. Now if 𝑅 is any other upper bound then 𝑆 ⊆ 𝑅 and 𝑇 ⊆ 𝑅,
which gives 𝑆 ∪ 𝑇 ⊆ 𝑅; so sup{𝑆, 𝑇} = 𝑆 ∪ 𝑇. Also 𝑆 ∩ 𝑇 ⊆ 𝑆 and 𝑆 ∩ 𝑇 ⊆ 𝑇, thus 𝑆 ∩ 𝑇
is a lower bound of 𝑆 and 𝑇. Now if 𝐿 is any other lower bound of 𝑆 and 𝑇 then 𝐿 ⊆ 𝑆 and 𝐿 ⊆
𝑇, which gives 𝐿 ⊆ 𝑆 ∩ 𝑇. Thus 𝑖𝑛𝑓{𝑆, 𝑇} = 𝑆 ∩ 𝑇.
Since ∅ ⊆ S for every 𝑆 ∈ ℙ( 𝐸 ) so ∅ is the lower bound of ℙ(𝐸). Also 𝑆 ⊆ 𝐸 for every 𝑆 ∈
ℙ (𝐸) so 𝐸 is the upper bound, therefore ℙ(𝐸) is bounded lattice.

Example 2.1.20: For every infinite set 𝐸, let ℙ𝑓(𝐸 ) be the set of finite subsets of 𝐸 then
𝑃𝑓 (𝐸 ) is a lattice with no top element.

Solution: Since 𝐸 is an infinite set, 𝐸 ∉ ℙ𝑓 (𝐸). Let 𝐴, 𝐵 ∈ ℙ𝑓 (𝐸); then 𝐴 ⊆ 𝐴 ∪ 𝐵 and 𝐵 ⊆ 𝐴 ∪


𝐵. Thus 𝐴 ∪ 𝐵 is upper bound of 𝐴 and 𝐵. Now if 𝑅 is any other upper bound then 𝐴 ⊆ 𝑅 and
𝐵 ⊆ 𝑅 this gives 𝐴 ∪ 𝐵 ⊆ 𝑅. So that sup {𝐴, 𝐵} = 𝐴 ∪ 𝐵. Similarly we can show that
𝑖𝑛𝑓{𝐴, 𝐵} = 𝐴 ∩ 𝐵. To show that ℙ𝑓 (𝐸) has no top element, suppose to the contrary that 𝑍
is the top element of ℙ𝑓 (𝐸). Since 𝐸 is infinite and 𝑍 is finite, there exists 𝑥 ∈ 𝐸 ∖ 𝑍. Then
{𝑥} ∈ ℙ𝑓 (𝐸) but {𝑥} ⊈ 𝑍, contradicting the choice of 𝑍. Thus ℙ𝑓 (𝐸) has no top element.

Example 2.1.21:(ℕ ∪ {0} ; ∣ ) is a bounded lattice.

Solution: Let 𝑚, 𝑛 ∈ ℕ ∪ {0}; then 𝑚 ∣ 𝑙𝑐𝑚(𝑚, 𝑛) and 𝑛 ∣ 𝑙𝑐𝑚(𝑚, 𝑛), so 𝑙𝑐𝑚(𝑚, 𝑛) is an
upper bound of {𝑚, 𝑛}. If 𝑘 is any upper bound of {𝑚, 𝑛}, then 𝑚 ∣ 𝑘 and 𝑛 ∣ 𝑘 implies
𝑙𝑐𝑚(𝑚, 𝑛) ∣ 𝑘. Therefore by definition 𝑠𝑢𝑝{𝑚, 𝑛} = 𝑙𝑐𝑚(𝑚, 𝑛). In a similar manner we
can show that 𝑖𝑛𝑓{𝑚, 𝑛} = ℎ𝑐𝑓(𝑚, 𝑛). Also, since 1 divides every natural number, 1 is
the bottom element; and since every natural number divides 0, 0 is the top element, thus ℕ ∪ {0}
is bounded. This proves that (ℕ ∪ {0}; ∣) is a bounded lattice.
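The two boundary facts used here, that 1 divides everything and everything divides 0, can be expressed directly in Python (our own helper name divides, with the convention 𝑛 ∣ 0 for all 𝑛):

```python
def divides(m, n):
    """m | n in N ∪ {0}; note every n divides 0."""
    return n == 0 if m == 0 else n % m == 0

elements = range(0, 13)
assert all(divides(1, n) for n in elements)   # 1 is the bottom element
assert all(divides(n, 0) for n in elements)   # 0 is the top element
```

This is why adjoining 0 to (ℕ; ∣) produces a bounded lattice, whereas (ℕ; ∣) alone has no top element.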

Example 2.1.22: If 𝑉 is a vector space and 𝑆𝑢𝑏𝑉 denotes the set of subspaces of 𝑉 then
( 𝑆𝑢𝑏𝑉 ; ⊆ ) forms a lattice with 𝑖𝑛𝑓 {𝐴, 𝐵} = 𝐴 ∩ 𝐵 and 𝑠𝑢𝑝 { 𝐴 , 𝐵 } = 𝐴 + 𝐵 = {𝑎 +
𝑏 |𝑎 ∈ 𝐴 and 𝑏 ∈ 𝐵}.

Solution: Suppose 𝐴, 𝐵 ∈ 𝑆𝑢𝑏𝑉; then 𝐴 ∩ 𝐵 ∈ 𝑆𝑢𝑏𝑉. Also 𝐴 ∩ 𝐵 ⊆ 𝐴 and 𝐴 ∩ 𝐵 ⊆ 𝐵,
therefore 𝐴 ∩ 𝐵 is a lower bound of 𝐴 and 𝐵. Let 𝑀 be any other lower bound of 𝐴 and 𝐵;
then 𝑀 ⊆ 𝐴 and 𝑀 ⊆ 𝐵, which implies 𝑀 ⊆ 𝐴 ∩ 𝐵. Therefore 𝑖𝑛𝑓{𝐴, 𝐵} = 𝐴 ∩ 𝐵. Also,
since 𝐴 ⊆ 𝐴 + 𝐵 and 𝐵 ⊆ 𝐴 + 𝐵, this implies 𝐴 + 𝐵 is an upper bound of 𝐴 and 𝐵. Let 𝑀 be
any other upper bound of 𝐴 and 𝐵 then 𝐴 ⊆ 𝑀 and 𝐵 ⊆ 𝑀; this gives 𝐴 + 𝐵 =
{ 𝑎 + 𝑏 ∣ 𝑎 ∈ 𝐴, 𝑏 ∈ 𝐵 } ⊆ 𝑀. Thus 𝑠𝑢𝑝 { 𝐴 , 𝐵 } = 𝐴 + 𝐵 = { 𝑎 + 𝑏 ∣ 𝑎 ∈ 𝐴, 𝑏 ∈ 𝐵 }.
Hence ( 𝑆𝑢𝑏 𝑉; ∩, +, ⊆ ) is a lattice. 𝑆𝑢𝑏𝑉 is also bounded with bottom element {0}, the zero
space and top element as 𝑉.
Example 2.1.23: If 𝐿, 𝑀 are lattices then the set of isotone mappings 𝑓 ∶ 𝐿 → 𝑀 form a
lattice in which 𝑓 ∧ 𝑔 and 𝑓 ∨ 𝑔 are given by the prescriptions :
(𝑓 ∧ 𝑔 ) (𝑥 ) = 𝑓 (𝑥) ∧ 𝑔 (𝑥) , ( 𝑓 ∨ 𝑔 ) (𝑥) = 𝑓 (𝑥 ) ∨ 𝑔(𝑥) .
Solution: Let 𝑆 = {𝑓 ∶ 𝐿 → 𝑀 ∣ for all 𝑥, 𝑦 ∈ 𝐿, 𝑥 ≤ 𝑦 implies 𝑓(𝑥) ≤ 𝑓(𝑦)}. We have to
show that 𝑆 is a lattice. Suppose that 𝑓, 𝑔 ∈ 𝑆 and define ℎ: 𝐿 → 𝑀 by ℎ(𝑥) = 𝑓(𝑥) ∧ 𝑔(𝑥).
If 𝑥 ≤ 𝑦 then 𝑓(𝑥) ≤ 𝑓(𝑦) and 𝑔(𝑥) ≤ 𝑔(𝑦), so ℎ(𝑥) ≤ ℎ(𝑦) and thus ℎ ∈ 𝑆. Clearly
ℎ ≤ 𝑓 and ℎ ≤ 𝑔. Moreover, if 𝑙 ∈ 𝑆 with 𝑙 ≤ 𝑓 and 𝑙 ≤ 𝑔, then for all 𝑥 ∈ 𝐿 we have
𝑙(𝑥) ≤ 𝑓(𝑥) and 𝑙(𝑥) ≤ 𝑔(𝑥), whence 𝑙(𝑥) ≤ 𝑓(𝑥) ∧ 𝑔(𝑥) = ℎ(𝑥), so 𝑙 ≤ ℎ. Thus 𝑓 ∧ 𝑔
exists and (𝑓 ∧ 𝑔)(𝑥) = 𝑓(𝑥) ∧ 𝑔(𝑥), as
required. Likewise we can show that (𝑓 ∨ 𝑔)(𝑥) = 𝑓(𝑥) ∨ 𝑔(𝑥).
Proposition 2.1.24: The set 𝑁(𝐺) of normal subgroups of a group 𝐺 forms a lattice in which
𝑠𝑢𝑝 { 𝐻 , 𝐾 } = {ℎ𝑘 ∣ ℎ ∈ 𝐻 and 𝑘 ∈ 𝐾} and 𝑖𝑛𝑓{𝐻, 𝐾} = 𝐻 ∩ 𝐾, where 𝐻, 𝐾 ∈ 𝑁(𝐺).

Proof : Let 𝐻, 𝐾 ∈ 𝑁 (𝐺), then clearly (𝐻 ∩ 𝐾) ∈ 𝑁(𝐺). Now 𝐻 ∩ 𝐾 ⊆ 𝐻 and 𝐻 ∩ 𝐾 ⊆ 𝐾


so that 𝐻 ∩ 𝐾 is the lower bound of 𝐻 and 𝐾. If 𝑊 is any other lower bound of 𝐻 and 𝐾 then
𝑊 ⊆ 𝐻 and 𝑊 ⊆ 𝐾; which implies 𝑊 ⊆ 𝐻 ∩ 𝐾. Thus inf {𝐻, 𝐾} = 𝐻 ∩ 𝐾. To prove that
supremum also exists, suppose that 𝐻 and 𝐾 are two normal subgroups of a group 𝐺. We claim
that 𝐻𝐾 = { ℎ𝑘 ∣ ℎ ∈ 𝐻 and 𝑘 ∈ 𝐾 }is also a normal subgroup of 𝐺. Let 𝑔 ∈ 𝐺 and 𝑥 ∈
𝐻𝐾 then 𝑥 = ℎ𝑘 for some ℎ ∈ 𝐻 and 𝑘 ∈ 𝐾. Thus we have 𝑔𝑥𝑔−1 =
𝑔(ℎ𝑘)𝑔−1 = (𝑔ℎ𝑔−1 )(𝑔𝑘𝑔−1 )
∈ 𝐻𝐾. This proves that 𝐻𝐾 ∈ 𝑁(𝐺). Also 𝐻, 𝐾 ⊆ 𝐻𝐾; therefore 𝐻𝐾 is an upper bound of 𝐻
and 𝐾 and note that if 𝐿 is any upper bound of 𝐻 and 𝐾 then 𝐻 ⊆ 𝐿 and 𝐾 ⊆ 𝐿; which implies
𝐻𝐾 ⊆ 𝐿. Thus 𝐻𝐾 is the smallest subgroup containing both 𝐻 and 𝐾 which implies
that 𝑠𝑢𝑝 {𝐻, 𝐾} = 𝐻𝐾 ={ hk ∣ℎ ∈ 𝐻 and 𝑘 ∈ 𝐾 }. Thus 𝑁(𝐺) forms a lattice.

Example 2.1.25: We draw the Hasse diagram of the lattice of subgroups of the alternating
group 𝒜 4.

Proof: 𝒜4 is the alternating group on 4 letters, that is, the set of all even permutations:
𝒜4 = {(1), (12)(34), (13)(24), (14)(23), (123), (132), (124), (142), (134), (143), (234), (243)},
which has 12 elements, so by Lagrange’s Theorem any subgroup of 𝒜4 can only have
order 1, 2, 3, 4, 6 or 12. The subgroups of order 1 and order 12 are trivial.
The subgroups of order 2 are {1, (1 2)(3 4)}, {1, (1 3)(2 4)}, {1, (1 4)(2 3)}. The subgroups of
order 3 are {1, (2 3 4), (2 4 3)}, {1, (1 3 4), (1 4 3)}, {1, (1 2 4), (1 4 2)} and
{1, (1 2 3), (1 3 2)}.
The only subgroup of order 4 is {1, (1 2)(3 4), (1 3)(2 4), (1 4)(2 3)}, and 𝒜4 has no subgroup of order 6.
Now let 𝑆𝑢𝑏 𝒜 4 denotes the subgroups of alternating group 𝒜 4, then;
𝑆𝑢𝑏 𝒜4 =
{{1}, {1, (1 2)(3 4)}, {1, (1 3)(2 4)}, {1, (1 4)(2 3)}, {1, (2 3 4), (2 4 3)}, {1, (1 3 4), (1 4 3)},
{1, (1 2 4), (1 4 2)}, {1, (1 2 3), (1 3 2)}, {1, (1 2)(3 4), (1 3)(2 4), (1 4)(2 3)}, 𝒜4 }. Thus the subgroup lattice of
the alternating group 𝒜4 is as follows:

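The count of ten subgroups can be verified by brute force. The following Python sketch is our own: it represents permutations as tuples acting on {0, 1, 2, 3}, builds 𝒜4 as the even permutations, and tests every candidate subset for closure (a non-empty subset of a finite group closed under the operation is a subgroup):

```python
from itertools import permutations, combinations

def parity(p):
    """Number of inversions mod 2; 0 for even permutations."""
    return sum(p[i] > p[j] for i in range(4) for j in range(i + 1, 4)) % 2

compose = lambda p, q: tuple(p[q[i]] for i in range(4))   # (p∘q)(i) = p(q(i))

A4 = [p for p in permutations(range(4)) if parity(p) == 0]
assert len(A4) == 12

def is_subgroup(S):
    # finite, non-empty and closed under composition => subgroup
    return all(compose(a, b) in S for a in S for b in S)

subgroups = [frozenset(c) for r in (1, 2, 3, 4, 6, 12)    # Lagrange's theorem
             for c in combinations(A4, r) if is_subgroup(frozenset(c))]

# 1 trivial + 3 of order 2 + 4 of order 3 + the Klein four-group + A4 itself
assert sorted(len(s) for s in subgroups) == [1, 2, 2, 2, 3, 3, 3, 3, 4, 12]
```

The search confirms in particular that no subset of order 6 is closed, i.e. 𝒜4 has no subgroup of order 6.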

2.2 Down-set lattices


If 𝐸 is an ordered set and 𝐴, 𝐵 are down sets of 𝐸 then clearly so also are 𝐴 ∩ 𝐵 and 𝐴 ∪ 𝐵.
Also 𝐴 ∩ 𝐵 is the largest down-set contained in both 𝐴 and 𝐵, therefore 𝑖𝑛𝑓{𝐴, 𝐵} = 𝐴 ∩ 𝐵
and similarly 𝑠𝑢𝑝 {𝐴, 𝐵} = 𝐴 ∪ 𝐵; thus the set of down-sets of 𝐸 forms a lattice. We shall
denote this lattice by 𝒪(𝐸 ).
We recall from the definition of a down-set that we include the empty subset as a down set.
Thus the lattice 𝒪(𝐸 ) is bounded with top element 𝐸 and bottom element ∅.

Example 2.2.1: Consider the set with the following Hasse diagram as shown below

Here we have; 𝑎 ↓ = {𝑎}, 𝑏 ↓ = {𝑏}, 𝑐 ↓ = {𝑎, 𝑐} , 𝑑 ↓ = {𝑎, 𝑑}, 𝑒 ↓ = {𝑏, 𝑒}.


Therefore 𝒪(𝐸) = {∅, {𝑎}, {𝑏}, {𝑎, 𝑐}, {𝑎, 𝑏, 𝑑}, {𝑏, 𝑒}, 𝐸}.
Thus the Hasse diagram of 𝒪(𝐸) is given as;

Down-set lattices will be of considerable interest to us later. For the moment we shall consider
how to compute the cardinality of 𝒪(𝐸) when the ordered set 𝐸 is finite. Upper and lower
bounds for this cardinality are provided by the following result.

Theorem 2.2.2: If 𝐸 is a finite ordered set with ∣ 𝐸 ∣ = 𝑛 then;


𝑛 + 1 ≤ ∣ 𝒪(𝐸 ) ∣ ≤ 2𝑛 .

Proof: Since 𝒪(𝐸) ⊆ ℙ(𝐸) we have |𝒪(𝐸)| ≤ |ℙ(𝐸)| = 2𝑛. For the lower bound, observe that the
𝑛 principal down-sets 𝑥↓ (𝑥 ∈ 𝐸) are pairwise distinct: if 𝑥↓ = 𝑦↓ then 𝑥 ≤ 𝑦 and 𝑦 ≤ 𝑥, whence
𝑥 = 𝑦. Moreover each 𝑥↓ contains 𝑥, so none of them is ∅. Thus 𝒪(𝐸) contains the 𝑛 principal
down-sets together with ∅, giving 𝑛 + 1 ≤ |𝒪(𝐸)|. Both bounds are attained.
Case 1: 𝐸 is a chain, say 𝑥1 ≤ 𝑥2 ≤ ⋯ ≤ 𝑥𝑛. This is the case in which 𝐸 has the least number of
down-sets, and 𝒪(𝐸) is then also a chain: a non-empty down-set of a finite chain is determined by
its greatest element, so the down-sets of 𝐸 are exactly ∅ and the principal down-sets
𝑥𝑖↓ = {𝑥1, … , 𝑥𝑖}. For example, for 𝑛 = 2 with 𝑥1 ≤ 𝑥2 the down-sets are ∅, {𝑥1} and {𝑥1, 𝑥2},
so |𝒪(𝐸)| = 3 = 2 + 1. In general |𝒪(𝐸)| = 𝑛 + 1.
Case 2: 𝐸 is an anti-chain. This is the case in which 𝐸 has the greatest number of down-sets:
since no two distinct elements of 𝐸 are comparable, every subset of 𝐸 is a down-set, so
𝒪(𝐸) = ℙ(𝐸), which has cardinality 2𝑛.
Thus 𝑛 + 1 ≤ ∣ 𝒪(𝐸) ∣ ≤ 2𝑛, and both bounds are attained.
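The bounds in Theorem 2.2.2 can be checked by brute force on small examples. The following sketch is our own illustration, not part of the text; the names `down_sets` and `leq` are ours. It enumerates all down-sets of a finite ordered set given by a comparability predicate:

```python
from itertools import combinations

def down_sets(elements, leq):
    """Enumerate the down-sets of a finite ordered set.

    `elements` is a list and leq(x, y) means x <= y.  A subset D is a
    down-set when y in D and x <= y together imply x in D.
    """
    found = []
    for r in range(len(elements) + 1):
        for combo in combinations(elements, r):
            s = set(combo)
            if all(x in s for y in s for x in elements if leq(x, y)):
                found.append(frozenset(s))
    return found

# A 3-element chain 1 <= 2 <= 3 has exactly n + 1 = 4 down-sets,
# while a 3-element anti-chain has all 2**3 = 8 subsets as down-sets.
chain = down_sets([1, 2, 3], lambda x, y: x <= y)
anti = down_sets([1, 2, 3], lambda x, y: x == y)
```

The two extreme cases realise the lower and upper bounds of the theorem, respectively.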

In certain cases ∣𝒪(𝐸)∣ can be calculated by using an ingenious algorithm that we shall
describe. For this purpose, we shall denote by 𝐸\{𝑥} the ordered set obtained from 𝐸 by
deleting the element 𝑥 and related comparabilities resulting from transitivity through 𝑥.

Example 2.2.3: Consider the lattice 𝐿 given by the following Hasse diagram.

Then 𝐿\{𝑥} is obtained by deleting the element 𝑥 and the related comparabilities. Thus
the Hasse diagram of 𝐿\{𝑥} is:

Definition 2.2.4: We shall also use the notation 𝑥↕ to denote the cone through 𝑥, namely the
set of elements that are comparable to 𝑥, formally;
𝑥 ↕ = 𝑥 ↓ ∪ 𝑥 ↑ = {𝑦 ∈ 𝐸 ∶ 𝑦 ∦ 𝑥}.

Example 2.2.5: If 𝐿 is as in the above example then 𝐿 ∖ 𝑥↕ is a singleton: 𝑥↕ = {𝑦 ∈ 𝐿 ∶ 𝑦 ∦ 𝑥} =


{𝑏, 𝑐, 𝑑, 𝑒, 𝑥} and 𝐿 = {𝑎, 𝑏, 𝑐, 𝑑, 𝑒, 𝑥}, thus 𝐿 ∖ 𝑥↕ = {𝑎}, which is a singleton.

Definition 2.2.6: We say that 𝑥 ∈ 𝐸 is maximal if there is no 𝑦 ∈ 𝐸 such that 𝑦 > 𝑥.

Definition 2.2.7: An element 𝑥 ∈ 𝐸 is called a minimal element of 𝐸 if there exists no element
𝑦 ∈ 𝐸 such that 𝑦 < 𝑥.
Clearly, in a finite ordered set a top (bottom) element can be characterized as a unique maximal (minimal) element.

We now give an alternate formula for ∣ 𝒪(𝐸 ) ∣ via the concept of cone and maximal and
minimal elements as described above.

Theorem 2.2.8: (Berman-Kohler) If 𝐸 is a finite ordered set and 𝑥 ∈ 𝐸 then;


∣ 𝒪(𝐸) ∣ = ∣ 𝒪(𝐸 ∖ 𝑥) ∣ + ∣ 𝒪(𝐸 ∖ 𝑥 ↕ ) ∣.

Proof: Let 𝑋 be a non-empty down-set of 𝐸 and let 𝑆 = {𝑥1, … , 𝑥𝑘} be the set of maximal
elements of 𝑋 (the 𝑥𝑖's are not related to each other), so that 𝑆 is an anti-chain. Since 𝐸 is
finite, every element of 𝑋 lies below some maximal element of 𝑋, so 𝑋 = 𝑆 ↓. Conversely, let 𝐹
be any anti-chain in 𝐸, that is, 𝑥 ∥ 𝑦 for all distinct 𝑥, 𝑦 ∈ 𝐹; then 𝐹 ↓ is a down-set whose set
of maximal elements is precisely 𝐹. Thus every non-empty down-set 𝑋 of 𝐸 is uniquely
determined by a non-empty anti-chain in 𝐸, and conversely. Counting ∅ as an anti-chain
(corresponding to the down-set ∅), we thus see that ∣ 𝒪(𝐸) ∣ is the number of anti-chains in 𝐸.
For any given element 𝑥 of 𝐸, this number can be expressed as the number of anti-chains that
contain 𝑥 plus the number of those that do not contain 𝑥.
Now if an anti-chain 𝐴 contains a particular element 𝑥 of 𝐸, then it contains no other element
of the cone 𝑥 ↕; for if 𝐴 contains an element 𝑦 ≠ 𝑥 of 𝑥 ↕ then 𝑥 ≤ 𝑦 or 𝑦 ≤ 𝑥, which is a
contradiction since 𝐴 is an anti-chain. Thus 𝐴 ∖ {𝑥} is an anti-chain in 𝐸 ∖ 𝑥 ↕ = {𝑦 ∈ 𝐸 ∶ 𝑥 ∥ 𝑦},
and conversely every anti-chain of 𝐸 ∖ 𝑥 ↕ together with 𝑥 forms an anti-chain of 𝐸 that
contains 𝑥. Hence the number of anti-chains that contain 𝑥 is precisely ∣ 𝒪(𝐸 ∖ 𝑥 ↕ ) ∣.
Likewise, the anti-chains that do not contain 𝑥 are precisely the anti-chains of 𝐸 ∖ 𝑥, so their
number is ∣ 𝒪(𝐸 ∖ 𝑥) ∣. Thus the result follows.
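The Berman-Kohler recursion lends itself directly to computation. The following sketch is our own illustration (the helper name `count_down_sets` is ours); it works with a comparability predicate `leq` and removes either the element 𝑥 or its whole cone at each step:

```python
def count_down_sets(elements, leq):
    """Count |O(E)| by the recursion of Theorem 2.2.8: the number of
    anti-chains not containing x plus the number of those containing x."""
    elements = list(elements)
    if not elements:
        return 1  # only the empty down-set (the empty anti-chain)
    x = elements[0]
    without_x = [y for y in elements if y != x]
    # delete the whole cone of x: every element comparable to x
    without_cone = [y for y in elements if not (leq(x, y) or leq(y, x))]
    return count_down_sets(without_x, leq) + count_down_sets(without_cone, leq)

chain_count = count_down_sets([1, 2, 3], lambda x, y: x <= y)   # a 3-chain
anti_count = count_down_sets([1, 2, 3], lambda x, y: x == y)    # a 3-anti-chain
```

For the chain this yields 4 = 𝑛 + 1 and for the anti-chain 8 = 2𝑛, in agreement with Theorem 2.2.2.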

Example 2.2.9: We now draw the Hasse diagram of the lattice of the down sets of each of
following ordered sets;

Solution: Let 𝐸 be the ordered set given by the Hasse diagram (1).
Then 𝒪(𝐸) = {∅, {𝑎}, {𝑎, 𝑏}, {𝑎, 𝑏, 𝑐}}; thus the Hasse diagram of down sets of is given as;
Let 𝐹 be the ordered set given by Hasse diagram (2)
Then 𝒪(𝐹) = {𝜙, {𝑎}, {𝑎, 𝑏}, {𝑎 , 𝑏 , 𝑐}, {𝑎, 𝑏, 𝑐, 𝑑, 𝑒 }, {𝑎, 𝑑}}
thus the Hasse diagram o𝑓 down sets of 𝐹 is;

Similarly, the Hasse diagram o𝑓 down sets of (3) is;

Hasse diagram of down sets of (4) is;

Example 2.2.10: If (𝑃1 ; ≤1 )and (𝑃2 ; ≤2 ) are the ordered sets represented by the diagrams (a)
and
(b) respectively. Draw the Hasse diagram of the down sets of 𝑃1 ∪ 𝑃2 .

Define an order on 𝑃1 ∪ 𝑃2 by;


𝑥, 𝑦 ∈ 𝑃1 𝑎𝑛𝑑𝑥 ≤1 𝑦,
𝑥 ≤ 𝑦 if and only if {
𝑥, 𝑦 ∈ 𝑃2 𝑎𝑛𝑑𝑥 ≤2 𝑦.

Note that 𝑃1 ∪ 𝑃2 = {𝑎, 𝑏, 𝑥, 𝑦, 𝑧}. We have 𝑎↓ = {𝑎}, 𝑏 ↓ = {𝑎, 𝑏}, 𝑥 ↓ = {𝑥}, 𝑦 ↓ = {𝑥, 𝑦},
𝑧 ↓ = {𝑥, 𝑧}. Since no element of 𝑃1 is comparable with any element of 𝑃2, a subset of 𝑃1 ∪ 𝑃2
is a down-set if and only if it is the union of a down-set of 𝑃1 and a down-set of 𝑃2. Here
𝒪(𝑃1) = {∅, {𝑎}, {𝑎, 𝑏}} and 𝒪(𝑃2) = {∅, {𝑥}, {𝑥, 𝑦}, {𝑥, 𝑧}, {𝑥, 𝑦, 𝑧}}, so 𝒪(𝑃1 ∪ 𝑃2)
consists of the 3 × 5 = 15 unions 𝐴 ∪ 𝐵 with 𝐴 ∈ 𝒪(𝑃1) and 𝐵 ∈ 𝒪(𝑃2). Thus the
required Hasse diagram is;

2.3 Sublattices
As we have seen in the previous section that important sub-structures of an ordered set are the
down-sets and the principal down sets. In this section we consider another type of substructure
of semilattices.

Definition 2.3.1: By a ∧-subsemilattice of a meet semilattice 𝐿 we mean a non-empty subset


𝐸 of 𝐿 that is closed under the meet operation, in the sense that if 𝑥, 𝑦 ∈ 𝐸 then 𝑥 ∧ 𝑦 ∈ 𝐸.
A ∨-subsemilattice of a join semilattice is defined dually.

Definition 2.3.2: By a sublattice of a lattice we mean a subset that is both a meet-subsemilattice
and a join-subsemilattice.

Example 2.3.3: If 𝑉 is a vector space and if 𝑆𝑢𝑏 𝑉 denotes the set of subspaces of 𝑉, then
( 𝑆𝑢𝑏 𝑉; ⊆) is easily seen to be an ordered set under inclusion. Suppose 𝐴, 𝐵 ∈ 𝑆𝑢𝑏 𝑉 we have
𝐴 ∩ 𝐵 ⊆ 𝐴 and 𝐴 ∩ 𝐵 ⊆ 𝐵; therefore 𝐴 ∩ 𝐵 is a lower bound of {𝐴, 𝐵}. Suppose 𝑊 is any
subspace of 𝑉 such that 𝑊 ⊆ 𝐴 and 𝑊 ⊆ 𝐵; then 𝑊 ⊆ 𝐴 ∩ 𝐵, so 𝐴 ∩ 𝐵 is the biggest
subspace that is contained in both 𝐴 and 𝐵. Thus 𝐴 ∩ 𝐵 = 𝑖𝑛𝑓{𝐴, 𝐵}. Therefore 𝑆𝑢𝑏 𝑉 is a
meet subsemilattice of the lattice ℙ(𝑉).

Example 2.3.4: For every ordered set 𝐸, the lattice 𝒪(𝐸 ) of down-sets of 𝐸 is a sublattice
of the lattice ℙ(𝐸 ), since 𝒪(𝐸 ) ⊆ ℙ(𝐸 ) and for any 𝐴, 𝐵 ∈ 𝒪 (𝐸 ); clearly 𝐴 ∩ 𝐵 ⊆ 𝐸. Let
𝑥 ∈ 𝐴 ∩ 𝐵 and 𝑦 ∈ 𝐸 with the property that 𝑦 ≤ 𝑥 then 𝑥 ∈ 𝐴 and 𝑥 ∈ 𝐵 and 𝑦 ∈ 𝐸 with 𝑦 ≤
𝑥. Since 𝐴 and 𝐵 are down sets, by definition of down-set 𝑦 ∈ 𝐴 and 𝑦 ∈ 𝐵; this gives 𝑦 ∈ 𝐴 ∩
𝐵. Thus 𝐴 ∩ 𝐵 ∈ 𝒪 (E). Likewise we can show that 𝐴 ∪ 𝐵 ∈ 𝒪(𝐸 ). Thus 𝒪(𝐸 ) is a sublattice
of ℙ(𝐸 ).

Particularly important sublattices of a lattice are those defined as:

Definition 2.3.5: By an ideal of a lattice 𝐿 we shall mean a sublattice 𝐽 of 𝐿 that is also a down-set.
Dually, by a filter of 𝐿 we mean a sublattice that is also an up-set.
Next we prove that the set of ideals of a lattice is again a lattice.

Theorem 2.3.6: If 𝐿 is a lattice then, ordered by set inclusion, the set 𝔗 (𝐿) of ideals of 𝐿
forms a lattice in which the lattice operations are given by

𝑖𝑛𝑓{𝐽, 𝐾} = 𝐽 ∩ 𝐾;
𝑠𝑢𝑝 {𝐽, 𝐾} = {𝑥 ∈ 𝐿 ∶ there exist 𝑗 ∈ 𝐽 and 𝑘 ∈ 𝐾 such that 𝑥 ≤ 𝑗 ∨ 𝑘}.

Proof: 𝐿et 𝐽, 𝐾 be ideals of 𝐿 and let 𝑎, 𝑏 ∈ 𝐽 ∩ 𝐾 then 𝑎, 𝑏 ∈ 𝐽 and 𝑎, 𝑏 ∈ 𝐾; since 𝐽, 𝐾are


ideals of 𝐿, which implies 𝑎 ∧ 𝑏, 𝑎 ∨ 𝑏 ∈ 𝐽 and 𝑎 ∨ 𝑏, 𝑎 ∧ 𝑏 ∈ 𝐾, this gives 𝑎 ∧ 𝑏 ∈ 𝐽 ∩
𝐾 and 𝑎 ∨ 𝑏 ∈ 𝐽 ∩ 𝐾. Also if 𝑎 ∈ 𝐿 and 𝑏 ∈ 𝐽 ∩ 𝐾 with 𝑎 ≤ 𝑏, then 𝑎 ∈ 𝐽 ∩ 𝐾. Thus 𝐽 ∩ 𝐾
is also an ideal of 𝐿 and if 𝑊 is any other ideal such that 𝑊 ⊆ 𝐽 and 𝑊 ⊆ 𝐾 then 𝑊 ⊆ 𝐽 ∩ 𝐾.
Hence 𝑖𝑛𝑓{𝐽, 𝐾} exists in 𝔗 (𝐿) and is 𝐽 ∩ 𝐾.
Let 𝑀 = {𝑥 ∈ 𝐿 ∣ 𝑥 ≤ 𝑗 ∨ 𝑘 for some 𝑗 ∈ 𝐽 and 𝑘 ∈ 𝐾}.We claim that 𝑀 is an ideal of 𝐿.
Suppose 𝑥1 , 𝑥2 ∈ 𝑀 then 𝑥1 ≤ 𝑗1 ∨ 𝑘1 and 𝑥2 ≤ 𝑗2 ∨ 𝑘2 for some 𝑗1 , 𝑗2 ∈ 𝐽 and 𝑘1 , 𝑘2 ∈ 𝐾.
Now 𝑥1 ∧ 𝑥2 ≤ 𝑥1 ≤ 𝑗1 ∨ 𝑘1 for some 𝑗1 ∈ 𝐽 and 𝑘1 ∈ 𝐾. Therefore 𝑥1 ∧ 𝑥2 ∈ 𝑀. Also
𝑥1 ∨ 𝑥2 ≤ (𝑗1 ∨ 𝑘1 ) ∨ (𝑗2 ∨ 𝑘2 ) = (𝑗1 ∨ 𝑗2 ) ∨ (𝑘1 ∨ 𝑘2 ) where 𝑗1 ∨ 𝑗2 ∈ 𝐽 and 𝑘1 ∨ 𝑘2 ∈ 𝐾;
this implies 𝑥1 ∨ 𝑥2 ∈ 𝑀. This shows that 𝑀 is a sublattice of 𝐿. To show 𝑀 is a down set of
𝐿, suppose 𝑎 ∈ 𝑀 such that 𝑧 ≤ 𝑎 for some 𝑧 ∈ 𝐿, but 𝑎 ∈ 𝑀 so 𝑎 ≤ 𝑗 ∨ 𝑘 for some 𝑗 ∈ 𝐽
and 𝑘 ∈ 𝐾; this implies 𝑧 ≤ 𝑗 ∨ 𝑘 for some 𝑗 ∈ 𝐽 and 𝑘 ∈ 𝐾; which implies 𝑧 ∈ 𝑀. Thus 𝑀 ∈
𝔗 (𝐿). Now we claim that 𝑀 = 𝑠𝑢𝑝{𝐽, 𝐾}. Suppose 𝑗 ∈ 𝐽; then 𝑗 ≤ 𝑗 ∨ 𝑘 for any 𝑘 ∈ 𝐾, thus 𝑗 ∈
𝑀, so 𝐽 ⊆ 𝑀. In a similar manner we can show that 𝐾 ⊆ 𝑀. This shows that 𝑀 is an upper
bound of 𝐽 and 𝐾. Let 𝑁 ∈ 𝔗 (𝐿) be such that 𝐽 ⊆ 𝑁 and 𝐾 ⊆ 𝑁; we show that 𝑀 ⊆ 𝑁. Let 𝑚 ∈ 𝑀;
then 𝑚 ≤ 𝑗 ∨ 𝑘 for some 𝑗 ∈ 𝐽 and 𝑘 ∈ 𝐾. Since 𝑗 ∈ 𝐽 ⊆ 𝑁, 𝑘 ∈ 𝐾 ⊆ 𝑁; this gives 𝑗, 𝑘 ∈ 𝑁.
Since 𝑁 is a sublattice, 𝑗 ∨ 𝑘 ∈ 𝑁; and since 𝑁 is also a down-set and 𝑚 ≤ 𝑗 ∨ 𝑘, we get 𝑚 ∈ 𝑁,
so 𝑀 ⊆ 𝑁. Thus 𝑀 = 𝑠𝑢𝑝 {𝐽, 𝐾}.

Note: By the above theorem the ideal lattice 𝔗(𝐿) is a meet subsemilattice of 𝒪(𝐿), since the
intersection of two ideals is again an ideal. It is not a sublattice, since the union of two ideals
need not be an ideal. This situation, in which a subsemilattice of a given lattice 𝐿 that is not a
sublattice of 𝐿 nevertheless forms a lattice with respect to the same order as 𝐿, is quite common in
lattice theory. Another instance has been seen before in Example 2.1.21, where the set
𝑆𝑢𝑏 𝑉 of subspaces of a vector space 𝑉 forms a lattice in which 𝑖𝑛𝑓{𝐴, 𝐵} = 𝐴 ∩ 𝐵 and
𝑠𝑢𝑝{𝐴, 𝐵} = 𝐴 + 𝐵, so that (𝑆𝑢𝑏 𝑉 ; ⊆) forms a lattice that is a ∩-subsemilattice, but not a
sublattice, of (𝑃(𝑉); ⊆). As we shall now see, a further instance is provided by a closure
mapping on a lattice.

Definition 2.3.7: An isotone mapping 𝑓: 𝐸 → 𝐸 is a closure on 𝐸 if it is such that 𝑓 = 𝑓 2 ≥ 𝑖𝑑𝐸,
and a dual closure if 𝑓 = 𝑓 2 ≤ 𝑖𝑑𝐸.
Theorem 2.3.8: Let 𝐿 be a lattice and let 𝑓: 𝐿 → 𝐿be a closure. Then 𝐼𝑚𝑓 is a lattice in which
the lattice operations are given 𝑏y
𝑖𝑛𝑓{𝑎, 𝑏} = 𝑎 ∧ 𝑏 and 𝑠𝑢𝑝{𝑎 , 𝑏} = 𝑓(𝑎 ∨ 𝑏)

Proof: Suppose that 𝑓: 𝐿 → 𝐿 is a closure and let 𝑥 ∈ 𝐼𝑚𝑓; then 𝑥 = 𝑓(𝑦) for some 𝑦 ∈ 𝐿, and
since 𝑓 2 = 𝑓 (by definition of a closure mapping) we obtain 𝑓(𝑥) = 𝑓 2 (𝑦) = 𝑓(𝑦) = 𝑥.
Consequently we see that 𝐼𝑚𝑓 = { 𝑥 ∈ 𝐿 ∶ 𝑓(𝑥) = 𝑥}. If 𝑎, 𝑏 ∈ 𝐼𝑚 𝑓 then
𝑓(𝑎) = 𝑎 and 𝑓(𝑏) = 𝑏. Since 𝑓 is isotone and 𝑓 = 𝑓 2 ≥ 𝑖𝑑𝐿 ,
We have 𝑓(𝑎) ∧ 𝑓(𝑏) = 𝑎 ∧ 𝑏 ≤ 𝑓(𝑎 ∧ 𝑏); ( since 𝑓 = 𝑓 2 ≥ 𝑖𝑑𝐿 ) (1)
Also 𝑎 ∧ 𝑏 ≤ 𝑎 and 𝑎 ∧ 𝑏 ≤ 𝑏 and 𝑓 is isotone, implies 𝑓(𝑎 ∧ 𝑏) ≤ 𝑓(𝑎) and 𝑓(𝑎 ∧ 𝑏) ≤
𝑓(𝑏).
This gives 𝑓(𝑎 ∧ 𝑏) ≤ 𝑓(𝑎) ∧ 𝑓(𝑏) (by connecting lemma). (2)
Combining (1) and (2) we get 𝑓(𝑎 ∧ 𝑏) = 𝑎 ∧ 𝑏. Thus 𝑎 ∧ 𝑏 ∈ 𝐼𝑚𝑓. It follows that Im𝑓 is a
meet subsemilattice of 𝐿.
As for the supremum in 𝐼𝑚𝑓 of 𝑎, 𝑏 ∈ 𝐼𝑚𝑓, we have 𝑎 ≤ 𝑎 ∨ 𝑏 and 𝑏 ≤ (𝑎 ∨ 𝑏), since 𝑓 is
isotone so 𝑓(𝑎) ≤ 𝑓(𝑎 ∨ 𝑏) and 𝑓(𝑏) ≤ 𝑓(𝑎 ∨ 𝑏), this gives 𝑓(𝑎) ∨ 𝑓(𝑏) = 𝑎 ∨ 𝑏 ≤ 𝑓(𝑎 ∨ 𝑏)
and so 𝑓(𝑎 ∨ 𝑏) ∈ 𝐼𝑚𝑓 is an upper bound of {𝑎, 𝑏}. Suppose now that 𝑐 = 𝑓(𝑐) ∈ 𝐼𝑚 𝑓 is
any other upper bound of {𝑎, 𝑏} in Im𝑓, then 𝑎 ≤ 𝑐 and 𝑏 ≤ 𝑐; this gives 𝑎 ∨ 𝑏 ≤ 𝑐. By
isotonicity of 𝑓 we obtain 𝑓(𝑎 ∨ 𝑏) ≤ 𝑓(𝑐) = 𝑐. Thus in the subset 𝐼𝑚𝑓, the upper bound
𝑓(𝑎 ∨ 𝑏) is less than or equal to every upper bound of {𝑎, 𝑏}. Consequently 𝑠𝑢𝑝 {𝑎, 𝑏} exists
in Im𝑓 and is 𝑓(𝑎 ∨ 𝑏 ).
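For a concrete illustration of Theorem 2.3.8 (our own, not from the text), take the power-set lattice of {1, 2, 3} and the closure that adjoins the element 1; its image consists of the sets containing 1:

```python
# Closure on the power-set lattice of {1, 2, 3}: adjoin the element 1.
# f is isotone, f o f = f and f(A) >= A, so f is a closure (Definition 2.3.7).
def f(a):
    return frozenset(a) | {1}

a = f({2})         # {1, 2}, an element of Im f
b = f({3})         # {1, 3}, an element of Im f
meet = a & b       # inf in Im f is the ordinary intersection
join = f(a | b)    # sup in Im f is f(a v b), as in Theorem 2.3.8
```

Both `meet` and `join` are again fixed points of `f`, i.e. elements of Im 𝑓, as the theorem asserts.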

2.4 Lattice morphisms


In this section we discuss lattice morphisms. These are the morphisms of ordered sets that
preserves the operations of meet and join.

Definition 2.4.1: If 𝐿 and 𝑀 are join semilattices then 𝑓: 𝐿 → 𝑀 is said to be a join morphism
if 𝑓( 𝑥 ∨ 𝑦 ) = 𝑓(𝑥) ∨ 𝑓(𝑦) for all 𝑥, 𝑦 ∈ 𝐿.
Dually if 𝐿 and 𝑀 are meet semilattices then 𝑓: 𝐿 → 𝑀 is said to be a meet morphism if
𝑓( 𝑥 ∧ 𝑦 ) = 𝑓(𝑥) ∧ 𝑓(𝑦) for all 𝑥, 𝑦 ∈ 𝐿.

Definition 2.4.2: If 𝐿 and 𝑀 are lattices then 𝑓: 𝐿 → 𝑀 is a lattice morphism if it is both a join
morphism and a meet morphism.

Definition 2.4.3: If 𝐿 and 𝑀 are join semilattices then a mapping 𝑓: 𝐿 → 𝑀 is said to be a


complete join morphism if for every family (𝑥𝛼 )𝛼∈𝐼 of elements of 𝐿 such that ⋁𝛼∈𝐼 𝑥𝛼 exists
in 𝐿, ⋁𝛼∈𝐼 𝑓(𝑥𝛼 )exists in 𝑀 and 𝑓(∨𝛼∈𝐼 𝑥𝛼 ) = ∨𝛼∈𝐼 𝑓(𝑥𝛼 ). The notion of complete meet
morphism is defined dually.

Theorem 2.4.4: If 𝐿 and 𝑀 are join semilattices then every residuated mapping 𝑓: 𝐿 → 𝑀 is a
complete join morphism.
Proof: Suppose that (𝑥𝛼 )𝛼∈𝐼 is a family of elements of 𝐿 such that 𝑥 = ⋁𝛼∈𝐼 𝑥𝛼 exists in 𝐿, for
each 𝛼 ∈ 𝐼 . We have 𝑥 ≥ 𝑥𝛼 and since 𝑓 is residuated implies 𝑓 is isotone, thus 𝑓(𝑥) ≥
𝑓(𝑥𝛼 ) for each 𝛼 ∈ 𝐼; if 𝑦 ≥ 𝑓(𝑥𝛼 ) for each 𝛼 ∈ 𝐼, then by the fact that 𝑓 + is isotone we
have 𝑓 + (𝑦) ≥ 𝑓 + (𝑓(𝑥𝛼 )). Since 𝑓 is residuated 𝑓 + ⃘𝑓 ≥ 𝑖𝑑𝐿 therefore 𝑓 + (𝑦) ≥ 𝑥𝛼 for each
𝛼 ∈ 𝐼 and so 𝑓 + (𝑦) ≥ ⋁𝛼∈𝐼 𝑥𝛼 = 𝑥. Since 𝑓 is residuated, 𝑦 ≥ 𝑓( 𝑓 + (𝑦)) ≥ 𝑓(𝑥). Thus every
upper bound of {𝑓(𝑥𝛼 ) ∶ 𝛼 ∈ 𝐼} lies above 𝑓(𝑥), and we see that ⋁𝛼∈𝐼 𝑓(𝑥𝛼 ) exists and is
𝑓(𝑥) = 𝑓(⋁𝛼∈𝐼 𝑥𝛼 ).

Definition2.4.5: We shall say that lattices 𝐿 and 𝑀 are isomorphic if they are isomorphic as
ordered sets.

Theorem 2.4.6: Lattices 𝐿 and 𝑀 are isomorphic if and only if there is a bijection 𝑓: 𝐿 → 𝑀
that is a ∨-morphism.

Proof: Suppose 𝐿 ≃ 𝑀, then by definition there is residuated bijection 𝑓: 𝐿 → 𝑀. So by above


Theorem 𝑓 is a join morphism. Conversely, suppose that 𝑓 ∶ 𝐿 → 𝑀 is a bijection and a join
morphism; then we have:
𝑥 ≤ 𝑦 if and only if 𝑥 ∨ 𝑦 = 𝑦 (by the connecting lemma)
if and only if 𝑓( 𝑥 ∨ 𝑦) = 𝑓(𝑦 ) (since 𝑓 is a bijection)
if and only if 𝑓(𝑥) ∨ 𝑓(𝑦) = 𝑓(𝑦) (since 𝑓 is a join morphism)
if and only if 𝑓(𝑥) ≤ 𝑓(𝑦) (by the connecting lemma).
So 𝐿 is isomorphic to 𝑀.

Example 2.4.7: Let 𝐿 be a lattice; then every isotone mapping from 𝐿 into any lattice 𝑀 is a
lattice morphism if and only if 𝐿 is a chain.

Proof: Suppose that 𝐿 is a chain and let 𝑓: 𝐿 → 𝑀 be isotone; we show that 𝑓 is a lattice
morphism. Let 𝑥, 𝑦 ∈ 𝐿. Then either 𝑥 ≤ 𝑦 or 𝑦 ≤ 𝑥; without loss of generality suppose 𝑥 ≤ 𝑦.
Then 𝑓(𝑥) ≤ 𝑓(𝑦), which gives 𝑓(𝑥) ∨ 𝑓(𝑦) = 𝑓(𝑦). Also 𝑥 ≤ 𝑦 implies 𝑥 ∨ 𝑦 = 𝑦, so
𝑓(𝑥 ∨ 𝑦) = 𝑓(𝑦) = 𝑓(𝑥) ∨ 𝑓(𝑦). Similarly 𝑓(𝑥 ∧ 𝑦) = 𝑓(𝑥) = 𝑓(𝑥) ∧ 𝑓(𝑦). Thus 𝑓 is a
lattice morphism.
To prove the converse, suppose that 𝐿 is not a chain; then there exist 𝑎, 𝑏 ∈ 𝐿 with 𝑎 ∥ 𝑏. Let 𝑀
be the two-element chain 0 < 1 and define 𝑓: 𝐿 → 𝑀 by 𝑓(𝑥) = 1 if 𝑥 ∈ 𝑎↑ ∪ 𝑏 ↑ and 𝑓(𝑥) = 0
otherwise. Since 𝑎↑ ∪ 𝑏 ↑ is an up-set, 𝑓 is isotone. Now 𝑓(𝑎) ∧ 𝑓(𝑏) = 1 ∧ 1 = 1; but
𝑎 ∧ 𝑏 ∉ 𝑎↑ ∪ 𝑏 ↑ (for 𝑎 ∧ 𝑏 ≥ 𝑎 would give 𝑎 ≤ 𝑏, and similarly for 𝑏), so 𝑓(𝑎 ∧ 𝑏) = 0 ≠
𝑓(𝑎) ∧ 𝑓(𝑏). Thus 𝑓 is an isotone mapping that is not a lattice morphism. Hence if every
isotone mapping on 𝐿 is a lattice morphism, 𝐿 must be a chain.

2.5 Complete lattice


We have seen that in a meet semilattice the infimum of every finite subset exists. We now
extend this concept to arbitrary subsets. Lattices are nicer structures than posets because they
allow us to take the meet and join of any pair of elements. What if we want the meet and join
of arbitrary subsets? Complete lattices allow us to do exactly that. All finite lattices are
complete, so the notion is of interest chiefly for infinite lattices. In this section we first discuss
complete lattices and show many ways in which they arise in mathematics. We then consider
the question: what if a given poset is not complete, or not even a lattice? Can we embed it into
a complete lattice? This brings us to the notion of lattice completion, which is useful for both
finite and infinite posets.

Definition 2.5.1: A ∧-semilattice 𝐿 is said to be ∧-complete if every subset 𝐸 = {𝑥𝛼 : 𝛼 ∈ 𝐴}


of 𝐿 has an infimum, which we denote by 𝑖𝑛𝑓𝐿 𝐸 or by ⋀𝛼∈𝐴 𝑥𝛼 . Dually we define a ∨-complete
∨-semilattice, in which case we use the notation 𝑠𝑢𝑝𝐿 𝐸 or ⋁𝛼∈𝐴 𝑥𝛼 .

Definition 2.5.2: A Lattice is said to be complete if it is both ∧-complete and ∨-complete.

Theorem 2.5.3: Every complete lattice has a top and a bottom element.

Proof: Let 𝐿 be a complete lattice; then by definition every subset 𝑀 of 𝐿 has both a
supremum and an infimum. Taking 𝑀 = 𝐿 we see that 𝑠𝑢𝑝𝐿 𝐿 is the top element of 𝐿 and 𝑖𝑛𝑓𝐿 𝐿
is the bottom element.
The following lemma provides us a way of showing that a lattice is complete by only proving
that the infimum exists, saving us half the work.

Lemma 2.5.4: (Half-work Lemma) A poset 𝑃 is a complete lattice if and only if 𝑖𝑛𝑓(𝑆) exists
for every 𝑆 ⊆ 𝑃.

Proof: The forward direction is trivial.


To prove converse we need to prove that 𝑠𝑢𝑝(𝑆) exists for every 𝑆 ⊆ 𝑃. To do so, we will
formulate 𝑠𝑢𝑝(𝑆) in terms of the infimum of some other set.
Consider the set 𝑇 of upper bounds of 𝑆, i.e., 𝑇 = {𝑥 ∈ 𝑃 ∶ 𝑠 ≤ 𝑥 for all 𝑠 ∈ 𝑆}.
Now let 𝑎 = 𝑖𝑛𝑓(𝑇). We claim that 𝑎 = 𝑠𝑢𝑝(𝑆). From the definition of 𝑇 we get 𝑠 ≤ 𝑡 for all
𝑠 ∈ 𝑆 and 𝑡 ∈ 𝑇; thus every 𝑠 ∈ 𝑆 is a lower bound of 𝑇, and since 𝑎 = 𝑖𝑛𝑓(𝑇) it follows that
𝑠 ≤ 𝑎 for all 𝑠 ∈ 𝑆. Thus 𝑎 is an upper bound of 𝑆. Further, for any upper bound 𝑡 of 𝑆, we
know that 𝑡 ∈ 𝑇. Therefore, 𝑎 ≤ 𝑡 because 𝑎 = 𝑖𝑛𝑓(𝑇). Thus, 𝑎 = 𝑠𝑢𝑝(𝑆).

Note: The set 𝑇 in the proof may be empty. In that case 𝑎 = 𝑖𝑛𝑓(∅) is the top element
of 𝑃.
Illustration of Half Work Lemma
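In the same spirit, here is a small computational illustration (our own sketch, not from the text; all names are ours). It computes sup(𝑆) as the infimum of the set 𝑇 of upper bounds, in the power-set lattice ordered by inclusion:

```python
from itertools import combinations

def sup_from_inf(S, universe, leq, inf):
    """sup(S) computed as inf of the set T of upper bounds of S (Lemma 2.5.4)."""
    T = [x for x in universe if all(leq(s, x) for s in S)]
    return inf(T)

# The power set of {1, 2, 3} ordered by inclusion; the infimum of a family
# is its intersection (the infimum of the empty family is the top element).
base = {1, 2, 3}
universe = [frozenset(c) for r in range(4) for c in combinations(sorted(base), r)]

def inf(T):
    out = frozenset(base)
    for t in T:
        out &= t
    return out

s = sup_from_inf([frozenset({1}), frozenset({2})], universe,
                 lambda a, b: a <= b, inf)
```

Here the upper bounds of {1} and {2} are exactly the supersets of {1, 2}, whose intersection is {1, 2}, the expected supremum.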


Example 2.5.5: For every non-empty set 𝐸 the power set lattice ℙ(𝐸) is complete. The top
element is 𝐸 and the bottom element is ∅.

Solution: We have already seen that ℙ(𝐸) is a bounded lattice with top element 𝐸 and bottom
element ∅. Let 𝑋 ⊆ ℙ(𝐸). First, ∪𝐶∈𝑋 𝐶 is an upper bound of 𝑋, since 𝐶 ⊆ ∪𝐶∈𝑋 𝐶 for each
𝐶 ∈ 𝑋. Now let 𝐵 be any upper bound of 𝑋 with respect to ⊆, so that 𝐶 ⊆ 𝐵 for each 𝐶 ∈ 𝑋.
Then ∪𝐶∈𝑋 𝐶 ⊆ 𝐵 as well (let 𝑥 ∈ ∪𝐶∈𝑋 𝐶; then 𝑥 ∈ 𝐶´ for some 𝐶´ ∈ 𝑋, and 𝐶´ ⊆ 𝐵, so
𝑥 ∈ 𝐵). So the union is contained in every upper bound, hence by definition ∪𝐶∈𝑋 𝐶 = 𝑠𝑢𝑝 𝑋.
A dual argument holds for the intersection, showing that ℙ(𝐸) is a complete lattice.

Example 2.5.6: Let 𝐿 be the lattice formed by adjoining to the chain (𝑄, ≤) of rationals a top
element ∞ and a bottom element −∞; then 𝐿 is bounded but is not complete.

Since 𝐿 has a top element and a bottom element, it is bounded. But 𝐿 is not complete: the set
{𝑥 ∈ 𝑄 ∶ 𝑥 > 0 and 𝑥² ≤ 2} has upper bounds in 𝐿 (for example 2 and ∞) but no least upper
bound, since √2 is not rational and every rational upper bound admits a smaller one.

Definition 2.5.7: Let 𝑅 be any relation on set 𝑋 then by 𝑅 𝑒 we mean the smallest equivalence
relation on 𝑋 containing 𝑅. We call it the equivalence relation generated by 𝑅.

Example 2.5.8: For every non-empty set 𝐸 the set 𝐸𝑞𝑢 𝐸 of equivalence relations on 𝐸 is a
complete lattice.

Solution: Suppose 𝐹 = (𝑅𝛼 )𝛼∈𝐴 is a family of equivalence relations on 𝐸. We claim that


𝐼𝑛𝑓𝛼∈𝐴 𝑅𝛼 exists in 𝐸𝑞𝑢 𝐸 and is the relation ⋀𝛼∈𝐴 𝑅𝛼 given by;
⋀𝛼∈𝐴 𝑅𝛼 = ∩𝛼∈𝐴 𝑅𝛼 .
Clearly ∩𝛼∈𝐴 𝑅𝛼 ∈ 𝐸𝑞𝑢 𝐸; since intersection of any family of equivalence relations is again
an equivalence relation. Now we show that it is also meet of (𝑅𝛼 )𝛼∈𝐴 .
Now ∩𝛼∈𝐴 𝑅𝛼 ⊆ 𝑅𝛼 for all 𝛼 ∈ 𝐴, so ∩𝛼∈𝐴 𝑅𝛼 is a lower bound of (𝑅𝛼 )𝛼∈𝐴 . Let 𝜆 ∈ 𝐸𝑞𝑢 𝐸
be any other lower bound of (𝑅𝛼 )𝛼∈𝐴 ; then by definition 𝜆 ⊆ 𝑅𝛼 for all 𝛼 ∈ 𝐴, which implies
𝜆 ⊆ ∩𝛼∈𝐴 𝑅𝛼 . Thus ⋀𝛼∈𝐴 𝑅𝛼 = ∩𝛼∈𝐴 𝑅𝛼 is the meet of (𝑅𝛼 )𝛼∈𝐴 .
Now we show that 𝑠𝑢𝑝 {(𝑅𝛼 )𝛼∈𝐴 } = (∪𝛼∈𝐴 𝑅𝛼 )𝑒 . For any 𝛼 ∈ 𝐴, 𝑅𝛼 ⊆ ∪𝛼∈𝐴 𝑅𝛼 ⊆
(∪𝛼∈𝐴 𝑅𝛼 )𝑒 ; so (∪𝛼∈𝐴 𝑅𝛼 )𝑒 is an upper bound of (𝑅𝛼 )𝛼∈𝐴 . Let 𝜆 ∈ 𝐸𝑞𝑢 𝐸 be any other upper
bound of (𝑅𝛼 )𝛼∈𝐴 ; then by definition 𝑅𝛼 ⊆ 𝜆 for all 𝛼 ∈ 𝐴, which implies ∪𝛼∈𝐴 𝑅𝛼 ⊆ 𝜆.
Since (∪𝛼∈𝐴 𝑅𝛼 )𝑒 is the smallest equivalence relation containing ∪𝛼∈𝐴 𝑅𝛼 , we get
(∪𝛼∈𝐴 𝑅𝛼 )𝑒 ⊆ 𝜆. Thus 𝑠𝑢𝑝 {(𝑅𝛼 )𝛼∈𝐴 } = (∪𝛼∈𝐴 𝑅𝛼 )𝑒 .
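The formula sup{𝑅₁, 𝑅₂} = (𝑅₁ ∪ 𝑅₂)ᵉ can be illustrated computationally. The following is our own sketch (the helper name `equivalence_generated` is ours); relations are sets of ordered pairs, and 𝑅ᵉ is formed by the reflexive-symmetric-transitive closure:

```python
def equivalence_generated(pairs, elements):
    """The smallest equivalence relation R^e on `elements` containing `pairs`."""
    rel = {(x, x) for x in elements}                 # reflexive
    rel |= set(pairs) | {(b, a) for a, b in pairs}   # symmetric
    changed = True
    while changed:                                   # transitive closure
        changed = False
        for (a, b) in list(rel):
            for (c, d) in list(rel):
                if b == c and (a, d) not in rel:
                    rel.add((a, d))
                    changed = True
    return rel

E = [1, 2, 3, 4]
R1 = {(1, 2)}    # generator of the equivalence identifying 1 and 2
R2 = {(2, 3)}    # generator of the equivalence identifying 2 and 3
sup_R = equivalence_generated(R1 | R2, E)   # sup{R1, R2} = (R1 u R2)^e
```

The join identifies 1, 2 and 3 (so the pair (1, 3), absent from the union, appears by transitivity), while 4 stays in its own class; the meet, by contrast, is the plain intersection.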

The relationship between complete semilattices and complete lattices is highlighted by the
following result.

Theorem 2.5.9: A ⋀-complete ⋀-semilattice is a complete lattice if and only if it has a top element.

Proof: Let 𝐿 be a ⋀-complete ⋀-semilattice. If 𝐿 is complete then 𝑠𝑢𝑝𝐿 𝐿 is the top element of
𝐿. Conversely, suppose that 𝐿 is a ⋀-complete ⋀-semilattice with top element 1 and let 𝑋 =
{𝑥𝛼 ∶ 𝛼 ∈ 𝐴 } be a non-empty subset of 𝐿. We show that 𝑠𝑢𝑝𝐿 𝑋 exists. Since 𝐿 has the top
element 1, the set 𝑋 ↑ of upper bounds of 𝑋 is non-empty, say 𝑋 ↑ = {𝑚𝛽 | 𝛽 ∈ 𝐵}. Since 𝐿 is
⋀-complete, ⋀𝛽∈𝐵 𝑚𝛽 exists. As each 𝑚𝛽 is an upper bound of 𝑋 we have 𝑥𝛼 ≤ 𝑚𝛽 for all
𝛼 ∈ 𝐴 and 𝛽 ∈ 𝐵, and therefore 𝑥𝛼 ≤ ⋀𝛽∈𝐵 𝑚𝛽 for every 𝛼 ∈ 𝐴; thus ⋀𝛽∈𝐵 𝑚𝛽 ∈ 𝑋 ↑.
Moreover ⋀𝛽∈𝐵 𝑚𝛽 ≤ 𝑚𝛽 for every 𝛽 ∈ 𝐵, so ⋀𝛽∈𝐵 𝑚𝛽 is the least upper bound, i.e. the
supremum of 𝑋 in 𝐿. Hence 𝐿 is complete.

Example 2.5.10: Let 𝐸 be an infinite set and let 𝑃𝑓 (𝐸) be the set of all finite subsets of 𝐸. We
have already shown that (𝑃𝑓 (𝐸), ⊆) is an ordered set. Let 𝑋 be any non-empty subset of 𝑃𝑓 (𝐸).
The intersection of any non-empty family of finite sets is finite, so ∩𝐶∈𝑋 𝐶 ∈ 𝑃𝑓 (𝐸). If 𝐵 is any
lower bound of 𝑋, then 𝐵 ⊆ 𝐶 for all 𝐶 ∈ 𝑋, whence 𝐵 ⊆ ∩𝐶∈𝑋 𝐶. Since 𝐵 is arbitrary, every
lower bound of 𝑋 is contained in ∩𝐶∈𝑋 𝐶, which is therefore the greatest lower bound of 𝑋.
Thus every non-empty subset of 𝑃𝑓 (𝐸) has an infimum. 𝑃𝑓 (𝐸) has no top element, but
𝑃𝑓 (𝐸) ∪ {𝐸} is a meet complete meet semilattice with top element 𝐸. So by Theorem 2.5.9,
𝑃𝑓 (𝐸) ∪ {𝐸} is complete.

Example 2.5.11: Let 𝐺 be a group and let 𝑆𝑢𝑏 𝐺 be the set of all subgroups of 𝐺, ordered by
set inclusion. Let 𝐻, 𝐾 ∈ 𝑆𝑢𝑏 𝐺; then 𝐻 ∩ 𝐾 ∈ 𝑆𝑢𝑏 𝐺, and since 𝐻 ∩ 𝐾 ⊆ 𝐻 and 𝐻 ∩ 𝐾 ⊆ 𝐾,
𝐻 ∩ 𝐾 is a lower bound of 𝐻 and 𝐾. Let 𝑊 be any other lower bound of 𝐻 and
𝐾, so that 𝑊 ⊆ 𝐻 and 𝑊 ⊆ 𝐾; then 𝑊 ⊆ 𝐻 ∩ 𝐾. So 𝐻 ∩ 𝐾 is the 𝑔𝑙𝑏 of {𝐻, 𝐾}, and hence
𝑆𝑢𝑏 𝐺 is a ∩-semilattice. It is in fact meet complete: if {𝐻𝜆 ∶ 𝜆 ∈ 𝐴} is any arbitrary
family of subgroups then ∩𝜆∈𝐴 𝐻𝜆 is also a subgroup and ∩𝜆∈𝐴 𝐻𝜆 = ⋀𝜆∈𝐴 𝐻𝜆 . Thus 𝑆𝑢𝑏 𝐺 is a
meet complete meet semilattice with top element 𝐺, so by Theorem 2.5.9, 𝑆𝑢𝑏 𝐺 is a complete
lattice.

Example 2.5.12: Consider the lattice {𝑁 ∪ {0} ; | }. Since every natural number divides 0, so
0 is the top element and 1 divides every natural number so 1 is bottom element. If 𝑋 is any non
empty subset of 𝑁 ∪ {0}, then as already seen in (Example 2.1.21) 𝑖𝑛𝑓𝑁 𝑋 exists and is the
greatest common divisor of the elements of 𝑋. So 𝑁 ∪ {0} is a meet complete meet semilattice with
top element 0. Thus by (Theorem 2.5.9) { 𝑁 ∪ {0} ; |} is a complete lattice.

Definition 2.5.13: Let 𝑓: 𝑋 → 𝑋 be a function then 𝑐 ∈ 𝑋 is a fixed point of 𝑓 if 𝑓(𝑐) = 𝑐.

Concerning complete lattices, we have the following remarkable result.

Theorem 2.5.14: (Knaster Fixed Point Theorem) If 𝐿 is a complete lattice and if 𝑓: 𝐿 → 𝐿 is


an isotone mapping, then 𝑓 has a fixed point.
Proof: Consider the set 𝐴 = {𝑥 ∈ 𝐿 ∶ 𝑥 ≤ 𝑓(𝑥)}. Since 𝐿 is complete it has a bottom element
0, and clearly 0 ∈ 𝐴 (since 0 ≤ 𝑓(0), 0 being the bottom element). By completeness of 𝐿
there exists 𝛼 = 𝑆𝑢𝑝𝐿 𝐴. Now for every 𝑥 ∈ 𝐴 we have 𝑥 ≤ 𝛼 and therefore, since 𝑓 is isotone,
𝑥 ≤ 𝑓(𝑥) ≤ 𝑓(𝛼) (since 𝑥 ∈ 𝐴 implies 𝑥 ≤ 𝑓(𝑥)).
This shows that 𝑓(𝛼) is an upper bound of 𝐴. Hence,
𝛼 = 𝑆𝑢𝑝𝐿 𝐴 ≤ 𝑓(𝛼), so 𝛼 ∈ 𝐴. (1)
Again, since 𝑓 is isotone, 𝛼 ≤ 𝑓(𝛼) gives 𝑓(𝛼) ≤ 𝑓(𝑓(𝛼)), which shows 𝑓(𝛼) ∈ 𝐴. So
𝑓(𝛼) ≤ 𝛼. (2)
Combining (1) and (2) we get 𝑓(𝛼) = 𝛼.
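In a finite complete lattice a fixed point can even be found by iteration: the chain 0 ≤ 𝑓(0) ≤ 𝑓²(0) ≤ … must stabilise. A small sketch of our own (not part of the theorem's proof), on the power-set lattice of {0, 1, 2, 3}:

```python
# Finite instance of Theorem 2.5.14: on a finite complete lattice the chain
# 0 <= f(0) <= f(f(0)) <= ... stabilises at a fixed point of the isotone f.
def f(a):
    # isotone on the power set of {0, 1, 2, 3}: keep a, add 0, and add n + 1
    # for every n in a with n < 3
    return frozenset(a) | {0} | frozenset(n + 1 for n in a if n < 3)

x = frozenset()        # bottom element of the lattice
while f(x) != x:
    x = f(x)           # x climbs: {} -> {0} -> {0,1} -> {0,1,2} -> {0,1,2,3}
```

The loop terminates because the lattice is finite and each step only enlarges the set; the resulting `x` is the least fixed point of `f`.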

Definition 2.5.14: We call two sets equipotent if and only if there exists a one-to-one function
from one set onto the other.

Next we give lattice theoretic proof of one of the well-known Theorem of set theory namely as
the Schroeder Bernstein Theorem.

Theorem 2.5.15: (Schroeder Bernstein Theorem) If 𝐸 and 𝐹 are the sets and if there are
injections 𝑓: 𝐸 → 𝐹 and 𝑔: 𝐹 → 𝐸 then 𝐸 and 𝐹 are equipotent.

Proof: We use the notation 𝑖𝑋 : 𝑃(𝑋) → 𝑃(𝑋) to denote the antitone mapping that sends every
subset of 𝑋 to its complement in 𝑋. Consider the mapping 𝜓: 𝑃(𝐸) → 𝑃(𝐸) given by

𝜓 = 𝑖𝐸 ∘ 𝑔→ ∘ 𝑖𝐹 ∘ 𝑓 →.

Since 𝑓 → and 𝑔→ are isotone and 𝑖𝐸 , 𝑖𝐹 are antitone, the composite 𝜓 is isotone. Since 𝑃(𝐸)
is a complete lattice and 𝜓: 𝑃(𝐸) → 𝑃(𝐸) is isotone, by the Fixed Point Theorem there exists
𝐺 ⊆ 𝐸 such that 𝜓(𝐺) = 𝐺, and therefore 𝑖𝐸 (𝐺) = (𝑔→ ∘ 𝑖𝐹 ∘ 𝑓 → )(𝐺), that is,
𝐸 ∖ 𝐺 = 𝑔→ (𝐹 ∖ 𝑓 → (𝐺)). This situation may be summarized pictorially

Now since 𝑓 and 𝑔 are injective by hypothesis, this configuration shows that we can define a
bijection ℎ: 𝐸 → 𝐹 by the prescription

        𝑓(𝑥) if 𝑥 ∈ 𝐺;
ℎ(𝑥) = {
        the unique element of 𝑔← ({𝑥}) if 𝑥 ∉ 𝐺

(the second case makes sense because 𝑥 ∉ 𝐺 gives 𝑥 ∈ 𝐸 ∖ 𝐺 = 𝑔→ (𝐹 ∖ 𝑓 → (𝐺)), so 𝑥 = 𝑔(𝑦)
for exactly one 𝑦 ∈ 𝐹, 𝑔 being injective).

Claim: ℎ is surjective. Let 𝑦 ∈ 𝐹 be any element.

Case 1: If 𝑦 ∈ 𝑓 → (𝐺) then 𝑦 = 𝑓(𝑥) for some 𝑥 ∈ 𝐺, and so 𝑦 = ℎ(𝑥).
Case 2: If 𝑦 ∉ 𝑓 → (𝐺), i.e. 𝑦 ∈ 𝑖𝐹 (𝑓 → (𝐺)) = 𝐹 ∖ 𝑓 → (𝐺), then 𝑔(𝑦) ∈ 𝑔→ (𝐹 ∖ 𝑓 → (𝐺)) =
𝐸 ∖ 𝐺. Thus 𝑔(𝑦) ∉ 𝐺, and ℎ(𝑔(𝑦)) is the unique element of 𝑔← ({𝑔(𝑦)}), which is 𝑦 since 𝑔
is injective. Hence 𝑦 = ℎ(𝑔(𝑦)).
In either case 𝑦 ∈ 𝐼𝑚 ℎ, which proves that ℎ is surjective.
Claim: ℎ is injective. Let 𝑥, 𝑥´ ∈ 𝐸 be such that ℎ(𝑥) = ℎ(𝑥´).
Case 1: If 𝑥, 𝑥´ ∈ 𝐺, then ℎ(𝑥) = 𝑓(𝑥) = 𝑓(𝑥´) = ℎ(𝑥´). Since 𝑓 is injective, 𝑥 = 𝑥´.
Case 2: If 𝑥 ∈ 𝐺 but 𝑥´ ∉ 𝐺, then ℎ(𝑥) = 𝑓(𝑥) ∈ 𝑓 → (𝐺), whereas ℎ(𝑥´) is the unique 𝑦´ ∈ 𝐹
with 𝑔(𝑦´) = 𝑥´. Since 𝑥´ ∈ 𝐸 ∖ 𝐺 = 𝑔→ (𝐹 ∖ 𝑓 → (𝐺)) and 𝑔 is injective, 𝑦´ ∈ 𝐹 ∖ 𝑓 → (𝐺), so
ℎ(𝑥´) ∉ 𝑓 → (𝐺). Thus ℎ(𝑥) ≠ ℎ(𝑥´), and this case cannot occur.

Case 3: If 𝑥 ∉ 𝐺 and 𝑥´ ∉ 𝐺, then 𝑔(ℎ(𝑥)) = 𝑥 and 𝑔(ℎ(𝑥´)) = 𝑥´, so ℎ(𝑥) = ℎ(𝑥´) gives 𝑥 = 𝑥´.
In every possible case 𝑥 = 𝑥´, so ℎ is injective. Thus ℎ: 𝐸 → 𝐹 is a bijection and 𝐸, 𝐹 are equipotent.
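For finite sets the whole construction can be carried out mechanically: iterate 𝜓 to a fixed point 𝐺 and read off ℎ. The following toy sketch is ours (the maps `f` and `g` are arbitrary illustrative injections; the theorem's real interest is of course for infinite sets, where the same recipe applies):

```python
# Finite illustration of the Schroeder-Bernstein construction: iterate
# psi(A) = E \ g(F \ f(A)) to a fixed point G, then read off the bijection h.
E = [0, 1, 2]
F = ['a', 'b', 'c']
f = {0: 'a', 1: 'b', 2: 'c'}    # an injection E -> F
g = {'a': 1, 'b': 2, 'c': 0}    # an injection F -> E

def psi(A):
    f_A = {f[x] for x in A}
    return set(E) - {g[y] for y in set(F) - f_A}

G = set(E)
while psi(G) != G:              # psi is isotone, so iteration stabilises
    G = psi(G)

g_inv = {v: k for k, v in g.items()}
h = {x: (f[x] if x in G else g_inv[x]) for x in E}   # the bijection E -> F
```

By construction `h` agrees with `f` on `G` and with the inverse of `g` off `G`, and is a bijection from `E` onto `F`.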

Theorem 2.5.16: (Ward’s Theorem) Let 𝐿 be a complete lattice if 𝑓 is a closure on 𝐿, then


Im 𝑓 is a complete lattice. Moreover for every non-empty subset 𝐴 of Im 𝑓:
𝑖𝑛𝑓𝐼𝑚𝑓 𝐴= 𝑖𝑛𝑓𝐿 𝐴 and 𝑠𝑢𝑝𝐼𝑚𝑓 𝐴 = 𝑓(𝑠𝑢𝑝𝐿 𝐴).

Proof: We first prove that 𝐼𝑚𝑓 is a ⋀-complete ⋀-semilattice. To see this suppose that 𝑓: 𝐿 →
𝐿 is a closure, let 𝑥 ∈ 𝐼𝑚𝑓 then 𝑥 = 𝑓(𝑦) for some𝑦 ∈ 𝐿, since 𝑓 is isotone, so 𝑓(𝑥) = 𝑓 2 (𝑦).
But since 𝑓 is a closure so 𝑓 2 (𝑦) = 𝑓(𝑦) = 𝑥. Thus 𝑓(𝑥) = 𝑥. Therefore Im𝑓 = {𝑥 ∈
𝐿: 𝑓(𝑥) = 𝑥} the set of fixed points of 𝑓. Now suppose that 𝐶 ⊆ 𝐼𝑚𝑓. Since Im𝑓 ⊆ 𝐿 and 𝐿 is
complete, therefore 𝐼𝑛𝑓𝐿 𝐶 exists. So let 𝑎 = 𝑖𝑛𝑓𝐿 𝐶. By definition of infimum, 𝑎 ≤ 𝑥 for all
𝑥 ∈ 𝐶; since 𝑓 is isotone, 𝑓(𝑎) ≤ 𝑓(𝑥) = 𝑥 (since 𝑥 ∈ 𝐼𝑚𝑓). Thus 𝑓(𝑎) is a lower bound of 𝐶.
Since 𝑎 is the greatest lower bound of 𝐶 we have 𝑓(𝑎) ≤ 𝑖𝑛𝑓𝐿 𝐶 = 𝑎; also 𝑎 ∈ 𝐿, therefore
𝑓(𝑎) = 𝑓 2 (𝑎) ≥ 𝑎 (by definition of closure). Therefore 𝑓(𝑎) = 𝑎, which implies 𝑎 ∈ 𝐼𝑚𝑓.
Thus 𝐼𝑚𝑓 is ∧-complete. Since 𝐿 is complete it
has a top element 1, so 𝑓(1) ∈ 𝐿 this implies 𝑓(1) ≤ 1 (since 1 is the top element). Also
since 𝑓is closure, therefore by definition 𝑓 ≥ 𝑖𝑑𝐿 ; this implies 𝑓(1) ≥ 1. Thus 𝑓(1) = 1
so 1 ∈ 𝐼𝑚𝑓. Thus 𝐼𝑚𝑓 is a ⋀-complete ⋀-semilattice with top element 1. Therefore by
Theorem 2.5.9, 𝐼𝑚𝑓 is a complete lattice. Suppose now that 𝐴 ⊆ 𝐼𝑚𝑓. If 𝑎 = 𝐼𝑛𝑓𝐿 𝐴, then from
above we have 𝑎 = 𝑓(𝑎) ∈ 𝐼𝑚𝑓.If now 𝑦 ∈ 𝑖𝑚𝑓 is such that 𝑦 ≤ 𝑥 for every 𝑥 ∈ 𝐴 then 𝑦 ≤
𝑎. Consequently we have 𝑎 = 𝑖𝑛𝑓𝐼𝑚 𝑓 𝐴. Therefore 𝐼𝑛𝑓𝐼𝑚𝑓 𝐴 = 𝐼𝑛𝑓𝐿 𝐴.
Now let 𝑆𝑢𝑝𝐿𝐴 = 𝑏 and 𝑏 ∗ = 𝑆𝑢𝑝𝐼𝑚 𝑓 𝐴. We show that 𝑏 ∗ = 𝑓(𝑏). Since 𝐼𝑚𝑓 is complete,
we have 𝑏 ∗ ∈ 𝐼𝑚𝑓so𝑏 ∗ = 𝑓(𝑏 ∗ ) and since 𝑏 ∗ ≥ 𝑥 for every 𝑥 ∈ 𝐴 we have 𝑏 ∗ ≥ sup𝐿 𝐴 =
𝑏.Thus by using the fact that 𝑓 is isotone, we have 𝑏 ∗ = 𝑓(𝑏 ∗ ) ≥ 𝑓(𝑏). But 𝑏 ≥ 𝑥 for every𝑥 ∈
𝐴. Therefore by isotonicity of 𝑓 we have 𝑓(𝑏) ≥ 𝑓(𝑥) = 𝑥 for all 𝑥 ∈ 𝐴 (since 𝑥 ∈ 𝐼𝑚𝑓) so
we also have 𝑓(𝑏) ≥ 𝑠𝑢𝑝𝐼𝑚𝑓 𝐴= 𝑏 ∗ . This implies 𝑏 ∗ = 𝑓(𝑏) as asserted.

Definition 2.5.17: Given ordered sets 𝐸, 𝐹 and antitone mapping 𝑓: 𝐸 → 𝐹 and 𝑔: 𝐹 → 𝐸, we


say that the pair (𝑓, 𝑔) establishes a Galois connection between 𝐸 and 𝐹 if 𝑓𝑔 ≥ 𝑖𝑑𝐹 and
𝑔𝑓 ≥ 𝑖𝑑𝐸 .

Remark 2.5.18: We now proceed to describe an important application of the above theorem. For
this purpose, given an ordered set 𝐸 consider the mapping 𝜗: 𝑃(𝐸) → 𝑃(𝐸) given by 𝜗(𝐴) =
𝐴↓ and the mapping 𝜑: 𝑃(𝐸) → 𝑃(𝐸) given by 𝜑(𝐴) = 𝐴↑ . If 𝐴, 𝐵 ∈ 𝑃(𝐸) with 𝐴 ⊆ 𝐵 then
clearly every lower bound of 𝐵 is a lower bound of 𝐴, whence 𝐵 ↓ ⊆ 𝐴↓ . Hence 𝜗 is antitone.
Dually, so is 𝜑. Now every element of 𝐴 is clearly a lower bound of the set of upper bounds of 𝐴,
whence 𝐴 ⊆ 𝐴↑↓ and therefore id𝑃(𝐸) ≤ 𝜗𝜑. Dually, every element of 𝐴 is an upper bound of
the set of lower bounds of 𝐴, so 𝐴 ⊆ 𝐴↓↑ and therefore id𝑃(𝐸) ≤ 𝜑𝜗. Consequently we see that
(𝜗, 𝜑) establishes a Galois connection on 𝑃(𝐸). We shall focus on the associated closure 𝐴 ⟼
𝐴↑↓ ; for this purpose we shall also require the following facts.

Theorem 2.5.19: Let 𝐸 be an ordered set. If (𝐴𝛼 )𝛼∈𝐼 is a family of subsets of 𝐸 then;
(⋃𝛼∈𝐼 𝐴𝛼 )↑ = ⋂𝛼∈𝐼 𝐴↑𝛼 and (⋃𝛼∈𝐼 𝐴𝛼 )↓ = ⋂𝛼∈𝐼 𝐴↓𝛼 .

Proof: Since each 𝐴𝛼 is contained in ⋃𝛼∈𝐼 𝐴𝛼 and 𝐴 ⟼ 𝐴↑ is antitone, we have that


(⋃𝛼∈𝐼 𝐴𝛼 )↑ ⊆ ⋂𝛼∈𝐼 𝐴↑𝛼 . To prove the reverse inclusion, observe that if 𝑥 ∈ ⋂𝛼∈𝐼 𝐴↑𝛼 then 𝑥 is
an upper bound of 𝐴𝛼 for every 𝛼 ∈ 𝐼. Hence 𝑥 is an upper bound of ⋃𝛼∈𝐼 𝐴𝛼 and therefore
belongs to (⋃𝛼∈𝐼 𝐴𝛼 )↑ .
To prove other part, we have 𝐴𝛼 ⊆ ∪𝛼∈𝐼 𝐴𝛼 and 𝐴 ⟼ 𝐴↓ is antitone, so we have that
(⋃𝛼∈𝐼 𝐴𝛼 )↓ ⊆∩𝛼∈𝐼 𝐴↓𝛼 . To prove reverse inclusion, suppose that 𝑥 ∈ ⋂𝛼∈𝐼 𝐴↓𝛼 , then 𝑥 is the
lower bound of 𝐴𝛼 for every 𝛼 ∈ 𝐼, this implies 𝑥 is the lower bound of ∪𝛼∈𝐼 𝐴𝛼 , which implies
that 𝑥 ∈ (∪𝛼∈𝐼 𝐴𝛼 )↓ . Thus (⋃𝛼∈𝐼 𝐴𝛼 )↓ = ⋂𝛼∈𝐼 𝐴↓𝛼 .

Definition 2.5.20: By an embedding of an ordered set 𝐸 into a lattice 𝐿 we mean a mapping


𝑓: 𝐸 → 𝐿 such that for all 𝑥, 𝑦 ∈ 𝐸, 𝑥 ≤ 𝑦 if and only if 𝑓(𝑥) ≤ 𝑓(𝑦).

Theorem 2.5.21: (Dedekind-MacNeille) Ever𝑦 ordered set 𝐸 can be embedded in a complete


lattice 𝐿 in such a way that meets and joins that exist in 𝐸 are preserved in 𝐿.

Proof : Since 𝐸 is an ordered set, so if 𝐸 does not have a top element or bottom element we
begin by adjoining whichever of these bounds is missing. Then without loss of generality we
may assume that 𝐸 is a bounded ordered set.
Let 𝑓: 𝑃(𝐸) → 𝑃(𝐸) be the closure mapping given by 𝑓(𝐴) = 𝐴↑↓ .Then by previous theorem
𝐿 = 𝐼𝑚𝑓 is a complete lattice. We have 𝑓({𝑥}) = {𝑥}↑↓ = 𝑥 ↓ for all 𝑥 ∈ 𝐸. Now if 𝑥 ≤ 𝑦
then 𝑧 ∈ 𝑥 ↓ gives 𝑧 ≤ 𝑥 ≤ 𝑦, so 𝑧 ∈ 𝑦 ↓ ; and conversely if 𝑥 ↓ ⊆ 𝑦 ↓ then 𝑥 ∈ 𝑦 ↓ , i.e. 𝑥 ≤ 𝑦.
So 𝑥 ≤ 𝑦 if and only if 𝑥 ↓ ⊆ 𝑦 ↓ , if and only if 𝑓({𝑥}) ⊆ 𝑓({𝑦}). It follows that 𝑓 induces an embedding
𝑓′: 𝐸 → 𝐿 given by𝑓 ′ (𝑥) = 𝑓({𝑥}) = {𝑥}↑↓ = 𝑥 ↓ . Suppose now that 𝐴 = {𝑥𝛼 : 𝛼 ∈ 𝐼} ⊆ 𝐸. If
𝑎 = ⋀𝛼∈𝐼 𝑥𝛼 then clearly 𝑎↓ = ⋂𝛼∈𝐼 𝑥𝛼↓ so that 𝑓´(𝑎) = ⋂𝛼∈𝐼 𝑓´(𝑥𝛼 ), that is existing infima are
preserved.
Suppose now that 𝑏 = ⋁𝛼∈𝐼 𝑥𝛼 exists. Since
𝑦 ≥ 𝑏 if and only if 𝑦 ≥ 𝑥𝛼 for all 𝛼 ∈ 𝐼;
if and only if 𝑦 ∈ ⋂𝛼∈𝐼 𝑥𝛼↑ = ⋂𝛼∈𝐼{𝑥𝛼 }↑↓↑ ;
if and only if 𝑦 ∈ (⋃𝛼∈𝐼{𝑥𝛼 }↑↓ )↑ (by Theorem 2.5.19),
we see that 𝑏 ↑ = (⋃𝛼∈𝐼{𝑥𝛼 }↑↓ )↑ . Consequently,
𝑓´(𝑏) = {𝑏}↑↓ = (⋃𝛼∈𝐼{𝑥𝛼 }↑↓ )↑↓
= 𝑓(𝑠𝑢𝑝ℙ(𝐸) {{𝑥𝛼 }↑↓ | 𝛼 ∈ 𝐼})
= 𝑠𝑢𝑝𝐼𝑚𝑓 {{𝑥𝛼 }↑↓ | 𝛼 ∈ 𝐼} (by Theorem 2.5.16)
= 𝑠𝑢𝑝𝐼𝑚𝑓 {𝑓´(𝑥𝛼 ) | 𝛼 ∈ 𝐼}.
So that existing suprema are also preserved.
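The Dedekind-MacNeille completion can be computed for small ordered sets. The sketch below is our own (the helper name `dm_completion` is ours); it collects all cuts 𝐴↑↓ by brute force:

```python
from itertools import combinations

def dm_completion(elements, leq):
    """All cuts A^(up)(down) for A a subset of E: the Dedekind-MacNeille completion."""
    def up(A):
        return {x for x in elements if all(leq(a, x) for a in A)}
    def down(A):
        return {x for x in elements if all(leq(x, a) for a in A)}
    cuts = set()
    for r in range(len(elements) + 1):
        for A in combinations(elements, r):
            cuts.add(frozenset(down(up(set(A)))))
    return cuts

# The 2-element anti-chain {a, b}: the cuts are the empty set, {a}, {b} and
# the whole set, i.e. a 4-element complete lattice containing the anti-chain.
cuts = dm_completion(['a', 'b'], lambda x, y: x == y)
```

The embedding 𝑥 ⟼ 𝑥↓ sends a to {a} and b to {b}, and the missing bounds ∅ and {a, b} are exactly the adjoined top and bottom cuts.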

Definition 2.5.22: The complete lattice 𝐿 = 𝐼𝑚𝑓 = {𝐴↑↓ |𝐴 ∈ 𝑃(𝐸)} in (Theorem 2.5.21)
is called the Dedekind-MacNeille completion of 𝐸.
Chapter 3

MODULAR, DISTRIBUTIVE AND COMPLEMENTED LATTICES


We describe a special class of lattices called modular lattices. Modular lattices are numerous
in mathematics, and distributive lattices form a special class of modular lattices. In this chapter we
first introduce both modular and distributive lattices and show the relationship between them.
Later we focus on modular lattices, and distributive lattices are considered in detail. In the last
section of this chapter we introduce complemented lattices and lattices with unique
complements, and we show when a uniquely complemented lattice is distributive. Before
formally introducing modular and distributive lattices we prove the following lemmas,
which will put the definitions into perspective.

3.1 Modular lattices


Lemma 3.1.1: Let 𝐿 be a lattice and let 𝑎, 𝑏, 𝑐 ∈ 𝐿. Then:
(1) 𝑎 ∧ (𝑏 ∨ 𝑐) ≥ (𝑎 ∧ 𝑏) ∨ (𝑎 ∧ 𝑐) and 𝑎 ∨ (𝑏 ∧ 𝑐) ≤ (𝑎 ∨ 𝑏) ∧ (𝑎 ∨ 𝑐);
(2) 𝑎 ≥ 𝑐 implies (𝑎 ∧ 𝑏) ∨ 𝑐 ≤ 𝑎 ∧ (𝑏 ∨ 𝑐), and 𝑎 ≤ 𝑐 implies 𝑎 ∨ (𝑏 ∧ 𝑐) ≤ (𝑎 ∨ 𝑏) ∧ 𝑐.

Proof: (1) We have 𝑎 ≥ 𝑎 ∧ 𝑏 and 𝑏 ∨ 𝑐 ≥ 𝑏 ≥ 𝑎 ∧ 𝑏, so 𝑎 ∧ (𝑏 ∨ 𝑐) ≥ 𝑎 ∧ 𝑏; similarly
𝑎 ∧ (𝑏 ∨ 𝑐) ≥ 𝑎 ∧ 𝑐. Therefore
𝑎 ∧ (𝑏 ∨ 𝑐) ≥ (𝑎 ∧ 𝑏) ∨ (𝑎 ∧ 𝑐).
Also 𝑎 ≤ 𝑎 ∨ 𝑏 and 𝑏 ∧ 𝑐 ≤ 𝑐 ≤ 𝑎 ∨ 𝑐, which implies 𝑎 ∨ (𝑏 ∧ 𝑐) ≤ (𝑎 ∨ 𝑏) ∧
(𝑎 ∨ 𝑐), which proves (1).
(2) If 𝑎 ≥ 𝑐, then by the connecting lemma 𝑎 ∧ 𝑐 = 𝑐; thus from (1) we get
(𝑎 ∧ 𝑏) ∨ 𝑐 = (𝑎 ∧ 𝑏) ∨ (𝑎 ∧ 𝑐) ≤ 𝑎 ∧ (𝑏 ∨ 𝑐).
Thus 𝑎 ≥ 𝑐 implies (𝑎 ∧ 𝑏) ∨ 𝑐 ≤ 𝑎 ∧ (𝑏 ∨ 𝑐).
In a similar way, 𝑎 ≤ 𝑐 gives 𝑎 ∨ 𝑐 = 𝑐 and hence 𝑎 ∨ (𝑏 ∧ 𝑐) ≤ (𝑎 ∨ 𝑏) ∧ (𝑎 ∨ 𝑐) = (𝑎 ∨ 𝑏) ∧ 𝑐,
which proves (2).

Lemma 3.1.2: Let L be a lattice then the following are equivalent:


(1) (for all 𝑎 , 𝑏 , 𝑐 ∈ 𝐿) 𝑎 ≥ 𝑐 implies 𝑎 ∧ (𝑏 ∨ 𝑐) = (𝑎 ∧ 𝑏) ∨ 𝑐;
(2) (for all 𝑎 , 𝑏 , 𝑐 ∈ 𝐿) 𝑎 ≥ 𝑐 implies 𝑎 ∧ (𝑏 ∨ 𝑐) = (𝑎 ∧ 𝑏) ∨ (𝑎 ∧ 𝑐);
(3) (for all 𝑝, 𝑞, 𝑟 ∈ 𝐿) 𝑝 ∧ (𝑞 ∨ (𝑝 ∧ 𝑟)) = (𝑝 ∧ 𝑞) ∨ (𝑝 ∧ 𝑟).
Proof: (1) ⇒ (2): Suppose that (1) holds and 𝑎 ≥ 𝑐; then 𝑎 ∧ 𝑐 = 𝑐 (by the connecting lemma),
so (𝑎 ∧ 𝑏) ∨ 𝑐 = (𝑎 ∧ 𝑏) ∨ (𝑎 ∧ 𝑐), and hence by (1), 𝑎 ∧ (𝑏 ∨ 𝑐) = (𝑎 ∧ 𝑏) ∨ (𝑎 ∧ 𝑐). Thus
(1) ⇒ (2) holds.
(2) ⇒ (1): If 𝑎 ≥ 𝑐 then 𝑎 ∧ 𝑐 = 𝑐; therefore by (2), 𝑎 ∧ (𝑏 ∨ 𝑐) = (𝑎 ∧ 𝑏) ∨ (𝑎 ∧ 𝑐) =
(𝑎 ∧ 𝑏) ∨ 𝑐, so that (2) ⇒ (1) holds.
(2) ⇒ (3): Put 𝑎 = 𝑝, 𝑏 = 𝑞, 𝑐 = 𝑝 ∧ 𝑟 in (2); this is permissible since 𝑝 ≥ 𝑝 ∧ 𝑟.
We get, for all 𝑝, 𝑞, 𝑟 ∈ 𝐿, 𝑝 ∧ (𝑞 ∨ (𝑝 ∧ 𝑟)) = (𝑝 ∧ 𝑞) ∨ (𝑝 ∧ (𝑝 ∧ 𝑟)) = (𝑝 ∧ 𝑞) ∨ (𝑝 ∧
𝑟).
Thus (2) ⇒ (3) is true.
(3) ⇒ (2): Assume 𝑎 ≥ 𝑐 and apply (3) with 𝑝 = 𝑎, 𝑞 = 𝑏, and 𝑟 = 𝑐; we get
𝑎 ∧ (𝑏 ∨ (𝑎 ∧ 𝑐)) = (𝑎 ∧ 𝑏) ∨ (𝑎 ∧ 𝑐).
Since 𝑎 ∧ 𝑐 = 𝑐, this gives 𝑎 ∧ (𝑏 ∨ 𝑐) = (𝑎 ∧ 𝑏) ∨ (𝑎 ∧ 𝑐). Thus (3) ⇒ (2) is true, which
proves the lemma.
The next lemma shows that meet distributes over join if and only if join distributes over meet.

Lemma 3.1.3: let 𝐿 be a lattice then the following are equivalent:


(1) (for all 𝑎 , 𝑏, 𝑐 ∈ 𝐿) 𝑎 ∧ (𝑏 ∨ 𝑐) = (𝑎 ∧ 𝑏) ∨ (𝑎 ∧ c);
(2) (for all 𝑝, 𝑞, 𝑟 ∈ 𝐿 ) 𝑝 ∨ (𝑞 ∧ 𝑟) = (𝑝 ∨ 𝑞) ∧ (𝑝 ∨ 𝑟).

Proof : Assume that ( 1) holds , then for all 𝑝, 𝑞, 𝑟 ∈ 𝐿 we have;


(𝑝 ∨ 𝑞) ∧ (𝑝 ∨ 𝑟) = ((𝑝 ∨ 𝑞) ∧ 𝑝) ∨ ((𝑝 ∨ 𝑞) ∧ 𝑟) (using (1))
= 𝑝 ∨ ((𝑝 ∨ 𝑞) ∧ 𝑟) (since 𝑝 ∨ 𝑞 ≥ 𝑝 implies (𝑝 ∨ 𝑞) ∧ 𝑝 = 𝑝)
= 𝑝 ∨ (𝑟 ∧ (𝑝 ∨ 𝑞)) (because the meet operation is commutative)
= 𝑝 ∨ ((𝑝 ∧ 𝑟) ∨ (𝑞 ∧ 𝑟)) (using (1))
= (𝑝 ∨ (𝑝 ∧ 𝑟)) ∨ (𝑞 ∧ 𝑟) (since the operation of join is associative)
= 𝑝 ∨ (𝑞 ∧ 𝑟) (since 𝑝 ≥ 𝑝 ∧ 𝑟).
Conversely suppose that (2) holds then for all 𝑎, 𝑏, 𝑐 ∈ 𝐿 we have;
(𝑎 ∧ 𝑏) ∨ ( 𝑎 ∧ 𝑐) = (( 𝑎 ∧ 𝑏) ∨ 𝑎) ∧ (( 𝑎 ∧ 𝑏) ∨ 𝑐) (by (2))
Now by using (2) and the fact that 𝑎 ∧ 𝑏 ≤ 𝑎 we get;
(𝑎 ∧ 𝑏) ∨ (𝑎 ∧ 𝑐) = 𝑎 ∧ (𝑐 ∨ 𝑎) ∧ (𝑐 ∨ 𝑏)
= (𝑎 ∧ (𝑐 ∨ 𝑎)) ∧ (𝑐 ∨ 𝑏) (since the meet operation is associative)
= 𝑎 ∧ (𝑏 ∨ 𝑐) (since 𝑎 ≤ 𝑐 ∨ 𝑎).
Thus (𝑎 ∧ 𝑏) ∨ (𝑎 ∧ 𝑐) = 𝑎 ∧ (𝑏 ∨ 𝑐).

Definition 3.1.4: Let 𝐿 be a lattice, then 𝐿 is said to be modular if it satisfies the modular law
that is;
for all 𝑎, 𝑏, 𝑐 ∈ 𝐿, 𝑎 ≥ 𝑐 implies 𝑎 ∧ (𝑏 ∨ 𝑐) = (𝑎 ∧ 𝑏) ∨ 𝑐,
or equivalently,
for all 𝑎, 𝑏, 𝑐 ∈ 𝐿, 𝑎 ≤ 𝑐 implies 𝑎 ∨ (𝑏 ∧ 𝑐) = (𝑎 ∨ 𝑏) ∧ 𝑐.

Remarks 3.1.5: From Lemma 3.1.1 we have 𝑎 ∧ (𝑏 ∨ 𝑐) ≥ (𝑎 ∧ 𝑏) ∨ (𝑎 ∧ 𝑐), and 𝑎 ≥ 𝑐 implies (𝑎 ∧ 𝑏) ∨ 𝑐 ≤ 𝑎 ∧ (𝑏 ∨ 𝑐); these are known as the distributive and modular inequalities respectively. This shows that any lattice is “half way” to being both modular and distributive: to establish modularity or distributivity, we need only check the reverse inequality.
We now first define two special lattices, the diamond lattice and the pentagon lattice. The
diamond lattice 𝑀3 is shown below:

𝑀3 is modular; it does not, however, satisfy the distributive law. To see this note that in the diagram of 𝑀3 we have
𝑎 ∧ (𝑏 ∨ 𝑐) = 𝑎 ∧ 1 = 𝑎 and (𝑎 ∧ 𝑏) ∨ (𝑎 ∧ 𝑐) = 0 ∨ 0 = 0, and 𝑎 ≠ 0, so 𝑀3 is not
distributive. The smallest lattice which is not modular is the pentagon (𝑁5 ) shown below;

In the above figure 𝑎 ≥ 𝑐 holds, however 𝑎 ∧ (𝑏 ∨ 𝑐) = 𝑎 ∧ 1 = 𝑎 but (𝑎 ∧ 𝑏) ∨ 𝑐 = 0 ∨ 𝑐 = 𝑐, and 𝑎 ≠ 𝑐.
So 𝑁5 is not modular.
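These small lattices can also be checked mechanically. The following sketch is our own illustration (the element names and the encoding of the order relation are assumptions, not part of the text): it computes meets and joins from the order relation by exhaustive search and tests the modular and distributive laws on 𝑀3 and 𝑁5.

```python
from itertools import product

def make_ops(elems, leq):
    """Compute meet (greatest lower bound) and join (least upper bound)
    from the order relation leq by exhaustive search."""
    def meet(x, y):
        lower = [z for z in elems if leq(z, x) and leq(z, y)]
        return next(z for z in lower if all(leq(w, z) for w in lower))
    def join(x, y):
        upper = [z for z in elems if leq(x, z) and leq(y, z)]
        return next(z for z in upper if all(leq(z, w) for w in upper))
    return meet, join

def is_modular(elems, leq, meet, join):
    # a >= c implies a ∧ (b ∨ c) = (a ∧ b) ∨ c
    return all(meet(a, join(b, c)) == join(meet(a, b), c)
               for a, b, c in product(elems, repeat=3) if leq(c, a))

def is_distributive(elems, meet, join):
    # a ∧ (b ∨ c) = (a ∧ b) ∨ (a ∧ c)
    return all(meet(a, join(b, c)) == join(meet(a, b), meet(a, c))
               for a, b, c in product(elems, repeat=3))

elems = ['0', 'a', 'b', 'c', '1']
# M3: 0 below everything, 1 above everything, a, b, c pairwise incomparable
m3_leq = lambda x, y: x == y or x == '0' or y == '1'
# N5: additionally c < a, while b is incomparable to both a and c
n5_leq = lambda x, y: m3_leq(x, y) or (x == 'c' and y == 'a')

m3_meet, m3_join = make_ops(elems, m3_leq)
n5_meet, n5_join = make_ops(elems, n5_leq)

m3_modular = is_modular(elems, m3_leq, m3_meet, m3_join)       # True
m3_distributive = is_distributive(elems, m3_meet, m3_join)     # False
n5_modular = is_modular(elems, n5_leq, n5_meet, n5_join)       # False
print(m3_modular, m3_distributive, n5_modular)
```

The failing triple reported for 𝑁5 is exactly the one computed above: 𝑎 ∧ (𝑏 ∨ 𝑐) = 𝑎 while (𝑎 ∧ 𝑏) ∨ 𝑐 = 𝑐.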

The following lemma is useful in proving the next theorem.

Lemma 3.1.6: Let 𝐿 be a lattice and, for 𝑎, 𝑏, 𝑐 ∈ 𝐿, let 𝑣 = 𝑎 ∧ (𝑏 ∨ 𝑐) and 𝑢 = (𝑎 ∧ 𝑏) ∨ 𝑐. Then 𝑣 > 𝑢 implies 𝑣 ∧ 𝑏 = 𝑢 ∧ 𝑏 and 𝑣 ∨ 𝑏 = 𝑢 ∨ 𝑏.

Proof: We have 𝑎 ∧ 𝑏 = (𝑎 ∧ 𝑏) ∧ 𝑏 (because 𝑎 ∧ 𝑏 ≤ 𝑏)

≤ [(𝑎 ∧ 𝑏) ∨ 𝑐] ∧ 𝑏 (because (𝑎 ∧ 𝑏) ∨ 𝑐 ≥ 𝑎 ∧ 𝑏)
= 𝑢 ∧ 𝑏 ≤ 𝑣 ∧ 𝑏 (since 𝑢 < 𝑣),
and 𝑣 ∧ 𝑏 = [𝑎 ∧ (𝑏 ∨ 𝑐)] ∧ 𝑏 (by definition of 𝑣)
= 𝑎 ∧ [(𝑏 ∨ 𝑐) ∧ 𝑏] (since the operation of meet is associative)
= 𝑎 ∧ 𝑏 (since (𝑏 ∨ 𝑐) ∧ 𝑏 = 𝑏).
Thus 𝑎 ∧ 𝑏 ≤ 𝑢 ∧ 𝑏 ≤ 𝑣 ∧ 𝑏 = 𝑎 ∧ 𝑏, so 𝑣 ∧ 𝑏 = 𝑢 ∧ 𝑏. Likewise we can prove that 𝑣 ∨ 𝑏 = 𝑢 ∨ 𝑏.

Proposition 3.1.7: Every sublattice of a modular lattice is modular.

Proof: Let 𝑀 be a sublattice of a modular lattice 𝐿. Let 𝑎, 𝑏, 𝑐 ∈ 𝑀 then 𝑎, 𝑏, 𝑐 ∈ 𝐿; since 𝐿 is


modular therefore if 𝑎 ≥ 𝑐 then 𝑎 ∧ (𝑏 ∨ 𝑐) = (𝑎 ∧ 𝑏) ∨ 𝑐 for all 𝑎, 𝑏, 𝑐 ∈ 𝑀 . Thus 𝑀 is
modular.
Example 3.1.8: The lattice 𝐿(𝑉) of subspaces of a vector space 𝑉 is modular, since it is a sublattice of the modular lattice of (normal) subgroups of the additive group of 𝑉.

Theorem 3.1.9: (𝑁5 Theorem) A lattice 𝐿 is modular if and only if it does not contain a
sublattice isomorphic to 𝑁5 .

Proof: If 𝐿 contains a sublattice isomorphic to 𝑁5 then 𝐿 clearly violates modularity, since modularity is inherited by sublattices and 𝑁5 is not modular.


Now assume to the contrary that 𝐿 is not modular; then there exist 𝑎, 𝑏, 𝑐 ∈ 𝐿 such that 𝑎 ≥ 𝑐 and 𝑎 ∧ (𝑏 ∨ 𝑐) > (𝑎 ∧ 𝑏) ∨ 𝑐. Clearly 𝑏 ∥ 𝑎 and 𝑏 ∥ 𝑐: for if 𝑏 ≤ 𝑎 then 𝑏 ∨ 𝑐 ≤ 𝑎 and 𝑎 ∧ 𝑏 = 𝑏, so 𝑎 ∧ (𝑏 ∨ 𝑐) = 𝑏 ∨ 𝑐 = (𝑎 ∧ 𝑏) ∨ 𝑐, a contradiction. The other cases are similar. We let 𝑣 = 𝑎 ∧ (𝑏 ∨ 𝑐) and 𝑢 = (𝑎 ∧ 𝑏) ∨ 𝑐; then 𝑣 > 𝑢. We claim that 𝑣 ∥ 𝑏 and 𝑏 ∥ 𝑢: indeed 𝑏 ≤ 𝑣 implies 𝑏 ≤ 𝑎 ∧ (𝑏 ∨ 𝑐) ≤ 𝑎, and 𝑏 ≥ 𝑢 implies 𝑏 ≥ 𝑐, contradicting 𝑏 ∥ 𝑎 and 𝑏 ∥ 𝑐; the cases 𝑣 ≤ 𝑏 and 𝑢 ≥ 𝑏 are handled similarly. Thus by Lemma 3.1.6, 𝑢 ∨ 𝑏 = 𝑣 ∨ 𝑏 and 𝑢 ∧ 𝑏 = 𝑣 ∧ 𝑏, and so 𝑢, 𝑣, 𝑏, 𝑢 ∨ 𝑏 and 𝑢 ∧ 𝑏 form a sublattice isomorphic to 𝑁5, shown below;

Example 3.1.10: The set of normal subgroups of any group 𝐺 forms a modular lattice.

Proof: Let 𝒩-Sub 𝐺 denote the set of normal subgroups of 𝐺. It is trivial to show that 𝒩-Sub 𝐺 is a poset under set containment. We first show that 𝒩-Sub 𝐺 forms a lattice with;
𝐺1 ∧ 𝐺2 = 𝐺1 ⋂ 𝐺2 and
𝐺1 ∨ 𝐺2 = 𝐺1𝐺2 = {𝑔1𝑔2 : 𝑔1 ∈ 𝐺1 and 𝑔2 ∈ 𝐺2}.
Where 𝐺1, 𝐺2 are any two members of 𝒩-Sub 𝐺. Now for all 𝑥 ∈ 𝐺1 ∩ 𝐺2 and for all 𝑔 ∈ 𝐺: 𝑥 ∈ 𝐺1 implies 𝑔𝑥𝑔⁻¹ ∈ 𝐺1, and 𝑥 ∈ 𝐺2 implies 𝑔𝑥𝑔⁻¹ ∈ 𝐺2.
Thus for all 𝑥 ∈ 𝐺1 ⋂ 𝐺2 and for all 𝑔 ∈ 𝐺 we have 𝑔𝑥𝑔⁻¹ ∈ 𝐺1 ⋂ 𝐺2. Also 𝐺1 ⋂ 𝐺2 ⊆ 𝐺1 and 𝐺1 ∩ 𝐺2 ⊆ 𝐺2; therefore 𝐺1 ⋂ 𝐺2 is a lower bound of 𝐺1 and 𝐺2. Let 𝑊 be any other lower bound of 𝐺1 and 𝐺2; then 𝑊 ⊆ 𝐺1 and 𝑊 ⊆ 𝐺2, which implies 𝑊 ⊆ 𝐺1 ∩ 𝐺2. Therefore 𝐺1 ∧ 𝐺2 = 𝐺1 ⋂ 𝐺2. Now let 𝑔 ∈ 𝐺 and 𝑥 ∈ 𝐺1𝐺2; then 𝑥 = 𝑔1𝑔2 for some 𝑔1 ∈ 𝐺1 and 𝑔2 ∈ 𝐺2. Thus 𝑔𝑥𝑔⁻¹ = 𝑔(𝑔1𝑔2)𝑔⁻¹ = (𝑔𝑔1𝑔⁻¹)(𝑔𝑔2𝑔⁻¹) ∈ 𝐺1𝐺2; thus 𝐺1𝐺2 ∈ 𝒩-Sub 𝐺. Since 𝐺1𝐺2 contains 𝐺1 and 𝐺2, and any normal subgroup containing both must contain every product 𝑔1𝑔2, it is the least upper bound; this proves the required claim. To prove that 𝒩-Sub 𝐺 is a modular lattice, we shall show that for 𝐺1, 𝐺2, 𝐺3 in 𝒩-Sub 𝐺, 𝐺2 ⊆ 𝐺1 implies 𝐺1 ⋂(𝐺2𝐺3) = 𝐺2(𝐺1 ⋂ 𝐺3).
Clearly 𝐺2 (𝐺1 ⋂𝐺3 ) ⊆ 𝐺1 ⋂(𝐺2 𝐺3 ) (since every lattice satisfies modular inequality)
To prove the reverse inclusion let 𝑥 ∈ 𝐺1 ⋂(𝐺2𝐺3). This gives 𝑥 ∈ 𝐺1 and 𝑥 ∈ 𝐺2𝐺3, which implies that 𝑥 = 𝑔1 and 𝑥 = 𝑔2𝑔3 for some 𝑔1 ∈ 𝐺1, 𝑔2 ∈ 𝐺2 and 𝑔3 ∈ 𝐺3. Thus 𝑔1 = 𝑔2𝑔3 ∈ 𝐺1, which implies 𝑔3 = 𝑔2⁻¹𝑔1 ∈ 𝐺1 (since 𝑔2 ∈ 𝐺2 ⊆ 𝐺1). Also 𝑔3 ∈ 𝐺3; hence 𝑔3 ∈ 𝐺1 ⋂ 𝐺3, so 𝑥 = 𝑔2𝑔3 ∈ 𝐺2(𝐺1 ⋂ 𝐺3). Therefore 𝐺1 ∩ (𝐺2𝐺3) = 𝐺2(𝐺1 ⋂ 𝐺3), showing that 𝒩-Sub 𝐺 is modular.

3.2. Some important results on Modular Lattices.


In this section we provide some results which give necessary and sufficient conditions for a
lattice 𝐿 to be modular.

Theorem 3.2.1: A lattice 𝐿 is modular if and only if the ideal lattice 𝔗(𝐿) is modular.

Proof: Suppose that 𝔗(𝐿) is modular. The map 𝑥 ↦ (𝑥], sending each element to the principal ideal it generates, embeds 𝐿 as a sublattice of 𝔗(𝐿); since every sublattice of a modular lattice is modular (Proposition 3.1.7), 𝐿 is modular. Conversely, suppose that 𝐿 is modular and let 𝐼, 𝐽, 𝐾 ∈ 𝔗(𝐿) with 𝐾 ⊆ 𝐼. Since 𝐾 ∨ (𝐼 ∧ 𝐽) ⊆ 𝐼 ∧ (𝐽 ∨ 𝐾) holds in every lattice, it suffices to prove the reverse inclusion. Let 𝑥 ∈ 𝐼 ∧ (𝐽 ∨ 𝐾); then 𝑥 ∈ 𝐼 and 𝑥 ≤ 𝑗 ∨ 𝑘 for some 𝑗 ∈ 𝐽, 𝑘 ∈ 𝐾. Since 𝑘 ∈ 𝐾 ⊆ 𝐼 we have 𝑥 ∨ 𝑘 ∈ 𝐼, and by the modularity of 𝐿 (with 𝑘 ≤ 𝑥 ∨ 𝑘),
𝑥 ∨ 𝑘 = (𝑥 ∨ 𝑘) ∧ (𝑗 ∨ 𝑘) = ((𝑥 ∨ 𝑘) ∧ 𝑗) ∨ 𝑘,
where (𝑥 ∨ 𝑘) ∧ 𝑗 ∈ 𝐼 ∧ 𝐽 and 𝑘 ∈ 𝐾. Hence 𝑥 ≤ 𝑥 ∨ 𝑘 ∈ (𝐼 ∧ 𝐽) ∨ 𝐾, so 𝑥 ∈ (𝐼 ∧ 𝐽) ∨ 𝐾. Thus 𝔗(𝐿) is modular.

Theorem 3.2.2: (Shearing Identity) A lattice 𝐿 is modular if and only if for all 𝑥, 𝑦, 𝑧 ∈ 𝐿
𝑥 ∧ (𝑦 ∨ 𝑧) = 𝑥 ∧ [(𝑦 ∧ (𝑥 ∨ 𝑧) ∨ 𝑧] (4)

Proof: We first prove that modularity implies shearing; We use the fact that 𝑥 ∨ 𝑧 ≥ 𝑧.
We have [𝑦 ∧ (𝑥 ∨ 𝑧)] ∨ 𝑧 = 𝑧 ∨ (𝑦 ∧ (𝑥 ∨ 𝑧)) (since the join operation is commutative)
= (𝑧 ∨ 𝑦) ∧ (𝑥 ∨ 𝑧) (by modularity of 𝐿)
= (𝑦 ∨ 𝑧) ∧ (𝑥 ∨ 𝑧) (since join operation is commutative).
Now consider the RHS of (4):
𝑥 ∧ [(𝑦 ∧ (𝑥 ∨ 𝑧)) ∨ 𝑧] = 𝑥 ∧ [(𝑦 ∨ 𝑧) ∧ (𝑥 ∨ 𝑧)] = 𝑥 ∧ (𝑦 ∨ 𝑧),
using the above computation and the fact that 𝑥 ≤ 𝑥 ∨ 𝑧. Hence (4) holds.
Conversely suppose that 𝐿 satisfies the shearing identity; we show that 𝐿 is modular.
Let 𝑥, 𝑦, 𝑧 ∈ 𝐿 with 𝑥 ≥ 𝑧, so that 𝑥 ∨ 𝑧 = 𝑥. Then (4) gives
𝑥 ∧ (𝑦 ∨ 𝑧) = 𝑥 ∧ [(𝑦 ∧ (𝑥 ∨ 𝑧)) ∨ 𝑧] = 𝑥 ∧ [(𝑥 ∧ 𝑦) ∨ 𝑧].
Since 𝑥 ∧ 𝑦 ≤ 𝑥 and 𝑧 ≤ 𝑥 we have (𝑥 ∧ 𝑦) ∨ 𝑧 ≤ 𝑥, whence 𝑥 ∧ [(𝑥 ∧ 𝑦) ∨ 𝑧] = (𝑥 ∧ 𝑦) ∨ 𝑧.
Thus 𝑥 ∧ (𝑦 ∨ 𝑧) = (𝑥 ∧ 𝑦) ∨ 𝑧 and so 𝐿 is modular.
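As a sanity check (an illustration of ours, not from the text; the element names encode the pentagon as above), the shearing identity can be tested on 𝑁5, where it must fail at some triple since 𝑁5 is not modular:

```python
from itertools import product

# N5 with 0 < c < a < 1 and b incomparable to a and c,
# encoded by its order relation; meet/join computed by search.
elems = ['0', 'a', 'b', 'c', '1']
leq = lambda x, y: x == y or x == '0' or y == '1' or (x == 'c' and y == 'a')

def meet(x, y):
    lower = [z for z in elems if leq(z, x) and leq(z, y)]
    return next(z for z in lower if all(leq(w, z) for w in lower))

def join(x, y):
    upper = [z for z in elems if leq(x, z) and leq(y, z)]
    return next(z for z in upper if all(leq(z, w) for w in upper))

def shearing_holds(x, y, z):
    # x ∧ (y ∨ z) == x ∧ [(y ∧ (x ∨ z)) ∨ z]
    return meet(x, join(y, z)) == meet(x, join(meet(y, join(x, z)), z))

violations = [(x, y, z) for x, y, z in product(elems, repeat=3)
              if not shearing_holds(x, y, z)]
print(violations)   # non-empty; the triple ('a', 'b', 'c') is among them
```

For the triple (𝑎, 𝑏, 𝑐) the left side is 𝑎 while the right side is 𝑐, matching the failure of modularity exhibited earlier.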

Proposition 3.2.3: A lattice 𝐿 is modular if and only if for all 𝑥, 𝑦, 𝑧 ∈ 𝐿;


𝑥 ∨ (𝑦 ∧ (𝑥 ∨ 𝑧)) = (𝑥 ∨ 𝑦) ∧ (𝑥 ∨ 𝑧) (5)

Proof: Suppose that 𝐿 is modular. Since 𝐿 is a lattice; for all 𝑥, 𝑦, 𝑧 ∈ 𝐿 we have 𝑥 ∨ 𝑧 ∈ 𝐿


so by using modularity of 𝐿 with 𝑥 ≤ 𝑥 ∨ 𝑧. We have 𝑥 ∨ (𝑦 ∧ (𝑥 ∨ 𝑧)) = (𝑥 ∨ 𝑦) ∧ (𝑥 ∨ 𝑧).
Conversely suppose that (5) holds. Let 𝑥, 𝑦, 𝑡 ∈ 𝐿 with 𝑥 ≤ 𝑡 and put 𝑧 = 𝑡 in (5). Since 𝑥 ∨ 𝑡 = 𝑡, (5) gives 𝑥 ∨ (𝑦 ∧ 𝑡) = (𝑥 ∨ 𝑦) ∧ 𝑡, which shows that 𝐿 is
modular.
Proposition 3.2.4: A lattice 𝐿 is modular if and only if;
for all 𝑥, 𝑦, 𝑧 ∈ 𝐿, {𝑥 ∨ [𝑦 ∧ (𝑥 ∨ 𝑧)]} ∧ 𝑧 = (𝑥 ∨ 𝑦) ∧ 𝑧 (6)

Proof: Suppose that 𝐿 is modular. We have 𝑥 ∨ [𝑦 ∧ (𝑥 ∨ 𝑧)] = (𝑥 ∨ 𝑦) ∧ (𝑥 ∨ 𝑧) (by the modularity of 𝐿, as in Proposition 3.2.3). Consider the LHS of (6); we have
{𝑥 ∨ [𝑦 ∧ (𝑥 ∨ 𝑧)]} ∧ 𝑧 = 𝑧 ∧ [(𝑥 ∨ 𝑧) ∧ (𝑥 ∨ 𝑦)] (since the meet operation is commutative)
= [𝑧 ∧ (𝑥 ∨ 𝑧)] ∧ (𝑥 ∨ 𝑦) (since the meet operation is associative)
= 𝑧 ∧ (𝑥 ∨ 𝑦) (since 𝑧 ≤ 𝑥 ∨ 𝑧)
= (𝑥 ∨ 𝑦) ∧ 𝑧 (since the meet operation is commutative).
Thus {𝑥 ∨ [𝑦 ∧ (𝑥 ∨ 𝑧)]} ∧ 𝑧 = (𝑥 ∨ 𝑦) ∧ 𝑧, which is (6).
Conversely suppose that (6) holds. Let 𝑥, 𝑦, 𝑧 ∈ 𝐿 with 𝑧 ≥ 𝑥. By Lemma 3.1.1, 𝑧 ∧ (𝑦 ∨ 𝑥) ≥ (𝑧 ∧ 𝑦) ∨ 𝑥 holds in every lattice. For the reverse inequality, since 𝑧 ≥ 𝑥 we have 𝑥 ∨ 𝑧 = 𝑧, so (6) gives
𝑧 ∧ (𝑦 ∨ 𝑥) = (𝑥 ∨ 𝑦) ∧ 𝑧 = {𝑥 ∨ [𝑦 ∧ (𝑥 ∨ 𝑧)]} ∧ 𝑧 = 𝑧 ∧ [(𝑦 ∧ 𝑧) ∨ 𝑥] ≤ (𝑧 ∧ 𝑦) ∨ 𝑥 (since 𝑎 ∧ 𝑏 ≤ 𝑏).
This gives 𝑧 ∧ (𝑦 ∨ 𝑥) = (𝑧 ∧ 𝑦) ∨ 𝑥. Thus 𝐿 is modular.

Theorem 3.2.5: A lattice 𝐿 is modular if and only if whenever 𝑎 ≥ b and 𝑎 ∧ 𝑐 = 𝑏 ∧ c and


𝑎 ∨ 𝑐 = 𝑏 ∨ 𝑐 for some 𝑐 ∈ 𝐿 then 𝑎 = 𝑏.

Proof: Let 𝐿 be a modular lattice and let 𝑎, 𝑏, 𝑐 ∈ 𝐿 be such that 𝑎 ≥ 𝑏, 𝑎 ∨ 𝑐 = 𝑏 ∨ 𝑐 and 𝑎 ∧ 𝑐 = 𝑏 ∧ 𝑐. Then:
𝑎 = 𝑎 ∧ (𝑎 ∨ 𝑐) (since 𝑎 ≤ 𝑎 ∨ 𝑐)
= 𝑎 ∧ (𝑏 ∨ 𝑐) (because 𝑎 ∨ 𝑐 = 𝑏 ∨ 𝑐)
= 𝑎 ∧ (𝑐 ∨ 𝑏) (since the join operation is commutative)
= (𝑎 ∧ 𝑐) ∨ 𝑏 (by the modularity of 𝐿, since 𝑎 ≥ 𝑏)
= (𝑏 ∧ 𝑐) ∨ 𝑏 (since 𝑎 ∧ 𝑐 = 𝑏 ∧ 𝑐)
= 𝑏 (since 𝑏 ∧ 𝑐 ≤ 𝑏).
This gives 𝑎 = 𝑏.
Conversely suppose that 𝐿 is a lattice satisfying the conditions stated in the theorem. Let
𝑎, 𝑏, 𝑐 ∈ 𝐿 and 𝑎 ≥ 𝑏. We can easily verify the following relations and their duals.
𝑎 ∧ (𝑏 ∨ 𝑐) = 𝑎 ∧ (𝑐 ∨ 𝑏) ≥ 𝑏 ∨ (𝑎 ∧ 𝑐) and
(𝑎 ∧ (𝑏 ∨ 𝑐)) ∧ 𝑐 = 𝑎 ∧ ((𝑏 ∨ 𝑐) ∧ 𝑐) = 𝑎 ∧ 𝑐. (7)
Also 𝑎 ∧ 𝑐 = (𝑎 ∧ 𝑐) ∧ 𝑐 ≤ (𝑏 ∨ (𝑎 ∧ 𝑐)) ∧ 𝑐 ≤ 𝑎 ∧ 𝑐, the last step because 𝑏 ∨ (𝑎 ∧ 𝑐) ≤ 𝑎 (as 𝑏 ≤ 𝑎). Hence
(𝑏 ∨ (𝑎 ∧ 𝑐)) ∧ 𝑐 = 𝑎 ∧ 𝑐. (8)
Since 𝑏 ≤ 𝑎, the dual of relation (7) is (𝑏 ∨ (𝑎 ∧ 𝑐)) ∨ 𝑐 = 𝑏 ∨ 𝑐 and the dual of relation (8) is (𝑎 ∧ (𝑏 ∨ 𝑐)) ∨ 𝑐 = 𝑏 ∨ 𝑐. Thus we have (𝑎 ∧ (𝑏 ∨ 𝑐)) ∧ 𝑐 = (𝑏 ∨ (𝑎 ∧ 𝑐)) ∧ 𝑐 and (𝑎 ∧ (𝑏 ∨ 𝑐)) ∨ 𝑐 = (𝑏 ∨ (𝑎 ∧ 𝑐)) ∨ 𝑐. Hence the assumed property, applied to 𝑎 ∧ (𝑏 ∨ 𝑐) ≥ 𝑏 ∨ (𝑎 ∧ 𝑐), implies that 𝑎 ∧ (𝑏 ∨ 𝑐) = 𝑏 ∨ (𝑎 ∧ 𝑐). So 𝐿 is modular.

3.3 Distributive lattice

In this section we introduce an important subclass of modular lattices, namely the distributive lattices. These were the first lattices to be considered in the earliest investigations. Here we discuss various examples of distributive lattices, criteria for deciding whether a lattice is distributive, and the important results which relate modular and distributive lattices.

Definition 3.3.1: A lattice 𝐿 is distributive if for all 𝑎, 𝑏, 𝑐 ∈ 𝐿;


𝑎 ∧ (𝑏 ∨ 𝑐) = (𝑎 ∧ 𝑏) ∨ (𝑎 ∧ 𝑐)
or
𝑎 ∨ (𝑏 ∧ 𝑐) = (𝑎 ∨ 𝑏) ∧ (𝑎 ∨ 𝑐).

Proposition 3.3.2: Every distributive lattice is modular.

Proof: Suppose that the lattice 𝐿 is distributive; then for all 𝑎, 𝑏, 𝑐 ∈ 𝐿, 𝑎 ∧ (𝑏 ∨ 𝑐) = (𝑎 ∧ 𝑏) ∨

(𝑎 ∧ 𝑐). If 𝑎 ≥ 𝑐 then 𝑎 ∧ 𝑐 = 𝑐. Therefore for all 𝑎, 𝑏, 𝑐 ∈ 𝐿 with 𝑎 ≥ 𝑐 we have 𝑎 ∧ (𝑏 ∨ 𝑐) =
(𝑎 ∧ 𝑏) ∨ (𝑎 ∧ 𝑐) = (𝑎 ∧ 𝑏) ∨ 𝑐. Thus 𝐿 is modular.

The converse of above proposition is not true. For example, the diamond lattice given below is
already seen to be modular. We show that this lattice is not distributive.

Since 𝑥 ∧ (𝑦 ∨ 𝑧) = 𝑥 ∧ 1 = 𝑥 but (𝑥 ∧ 𝑦) ∨ (𝑥 ∧ 𝑧) = 0 ∨ 0 = 0 and 𝑥 ≠ 0. Thus 𝑥 ∧


(𝑦 ∨ 𝑧) ≠(𝑥 ∧ 𝑦) ∨ (𝑥 ∧ 𝑧).

Remark 3.3.3: Distributivity can be defined either by (1) or by (2) of Lemma 3.1.3. In other words, 𝐿 is distributive if and only if its dual 𝐿𝐷 is. An application of the duality principle shows that, likewise, 𝐿 is modular if and only if 𝐿𝐷 is.

Proposition3.3.4: Every sublattice of a distributive lattice is distributive lattice.

Proof: Let 𝐿 be a distributive lattice and 𝑃 ⊆ 𝐿 be the sublattice of 𝐿. To show that 𝑃 is


distributive we suppose that 𝑎, 𝑏, 𝑐 ∈ 𝑃, then 𝑎, 𝑏, 𝑐 ∈ 𝐿. Since 𝐿 is distributive we have;
𝑎 ∧ (𝑏 ∨ 𝑐) = (𝑎 ∧ 𝑏) ∨ (𝑎 ∧ 𝑐) for all 𝑎, 𝑏, 𝑐 ∈ 𝑃 . Thus 𝑃 is a distributive lattice.
Example 3.3.5: (ℙ(𝐸);∩,∪, ⊆) is a distributive lattice.

Proof: We know that ℙ(𝐸) is a bounded lattice with top element 𝐸 and the bottom element ∅.
Let 𝐴, 𝐵, 𝐶 ∈ ℙ(𝐸). Then 𝑥 ∈ 𝐴 ∩ (𝐵 ∪ 𝐶)
if and only if 𝑥 ∈ 𝐴 and 𝑥 ∈ 𝐵 ∪ 𝐶
if and only if (𝑥 ∈ 𝐴 and 𝑥 ∈ 𝐵) or (𝑥 ∈ 𝐴 and 𝑥 ∈ 𝐶)
if and only if 𝑥 ∈ 𝐴 ∩ 𝐵 or 𝑥 ∈ 𝐴 ∩ 𝐶
if and only if 𝑥 ∈ (𝐴 ∩ 𝐵) ∪ (𝐴 ∩ 𝐶).
Thus 𝐴 ∩ (𝐵 ∪ 𝐶) = (𝐴 ∩ 𝐵) ∪ (𝐴 ∩ 𝐶). So ℙ(𝐸) is a distributive lattice.
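The set-theoretic argument can also be confirmed exhaustively on a small ground set (our own sketch; the choice 𝐸 = {1, 2, 3} is an arbitrary assumption):

```python
from itertools import combinations, product

E = {1, 2, 3}
# all 2^|E| subsets of E, i.e. the elements of P(E)
subsets = [frozenset(s) for r in range(len(E) + 1)
           for s in combinations(sorted(E), r)]

# In P(E): meet is intersection (&), join is union (|).
both_laws = all(
    A & (B | C) == (A & B) | (A & C) and
    A | (B & C) == (A | B) & (A | C)
    for A, B, C in product(subsets, repeat=3)
)
print(both_laws)   # True
```

This checks both distributive laws over all 8³ = 512 triples of subsets.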

Example 3.3.6: If 𝐸 is an ordered set then the lattice 𝒪(𝐸) of down sets of 𝐸 is a distributive
lattice.

Proof: 𝒪(𝐸) is a sublattice of ℙ(𝐸), and ℙ(𝐸) is a distributive lattice; thus by Proposition 3.3.4, 𝒪(𝐸) is distributive.

Example 3.3.7: Any chain is a distributive lattice.

Proof: Let 𝒞 be any chain and let 𝑥, 𝑦, 𝑧 ∈ 𝒞. We have 𝑥 ∧ (𝑦 ∨ 𝑧) ≥ (𝑥 ∧ 𝑦) ∨ (𝑥 ∧ 𝑧) holds


for all lattices. We only have to prove the reverse inequality. Here we have different cases to
consider.
Case 1: If 𝑥 ≤ 𝑦, 𝑧 then; 𝑥 ∧ 𝑦 = 𝑥, 𝑥 ∨ 𝑦 = 𝑦, 𝑥 ∧ 𝑧 = 𝑥 and 𝑥 ∨ 𝑧 = 𝑧 so, 𝑥 ∧ (𝑦 ∨ 𝑧) ≤ 𝑥 =
𝑥 ∨ 𝑥 = (𝑥 ∧ 𝑦) ∨ (𝑥 ∧ 𝑧). Thus in this case,𝑥 ∧ (𝑦 ∨ 𝑧) = (𝑥 ∧ 𝑦) ∨ (𝑥 ∧ 𝑧).
Case2: If 𝑥 ≥ 𝑦, 𝑧 then;
𝑥 ∧ 𝑦 = 𝑦, 𝑥 ∧ 𝑧 = 𝑧 , 𝑥 ∨ 𝑦 = 𝑥 and 𝑥 ∨ 𝑧 = 𝑥.
Now 𝑥 ∧ (𝑦 ∨ 𝑧) ≤ 𝑦 ∨ 𝑧 = (𝑥 ∧ 𝑦) ∨ (𝑥 ∧ 𝑧). Thus 𝑥 ∧ (𝑦 ∨ 𝑧) = (𝑥 ∧ 𝑦) ∨ (𝑥 ∧ 𝑧) holds in
this case also.
Case 3: If 𝑥 ≤ 𝑦 but 𝑥 ≥ 𝑧 then;
𝑥 ∧ 𝑦 = 𝑥, 𝑥 ∨ 𝑦 = 𝑦, 𝑥 ∧ 𝑧 = 𝑧 and 𝑥 ∨ 𝑧 = 𝑥.
Now 𝑥 ∧ (𝑦 ∨ 𝑧) ≤ 𝑥 = 𝑥 ∨ 𝑧 = (𝑥 ∧ 𝑦) ∨ (𝑥 ∧ 𝑧). Thus 𝑥 ∧ (𝑦 ∨ 𝑧) = (𝑥 ∧ 𝑦) ∨ (𝑥 ∧ 𝑧)
holds in this case also. Similarly, we can verify the remaining cases. Thus in all cases we have 𝑥 ∧ (𝑦 ∨ 𝑧) = (𝑥 ∧ 𝑦) ∨ (𝑥 ∧ 𝑧).
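In a chain of integers, meet and join are simply min and max, so the case analysis above can be confirmed by brute force (an illustration of ours, not from the text; the chain length is arbitrary):

```python
from itertools import product

chain = range(6)  # the chain 0 < 1 < ... < 5 with meet = min, join = max
distributive = all(
    min(x, max(y, z)) == max(min(x, y), min(x, z))
    for x, y, z in product(chain, repeat=3)
)
print(distributive)   # True
```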

Example 3.3.8: If 𝐷 is a distributive lattice and 𝑓: 𝐷 → 𝐿 is lattice morphism then Im𝑓 is a


distributive sublattice of 𝐿.

Proof: We have already seen that Im 𝑓 is a sublattice of 𝐿. Since 𝑓: 𝐷 → 𝐿 is a lattice morphism we have, for all 𝑥, 𝑦, 𝑧 ∈ 𝐷,
𝑓(𝑦 ∨ 𝑧) = 𝑓(𝑦) ∨ 𝑓(𝑧) and
𝑓(𝑥 ∧ (𝑦 ∨ 𝑧)) = 𝑓(𝑥) ∧ (𝑓(𝑦) ∨ 𝑓(𝑧)). (8)
Also by the distributivity of 𝐷 we have;
𝑓(𝑥 ∧ (𝑦 ∨ 𝑧)) = 𝑓((𝑥 ∧ 𝑦) ∨ (𝑥 ∧ 𝑧))
= 𝑓(𝑥 ∧ 𝑦) ∨ 𝑓(𝑥 ∧ 𝑧)
= {𝑓(𝑥) ∧ 𝑓(𝑦)} ∨ {𝑓(𝑥) ∧ 𝑓(𝑧)}. (9)
Combining (8) and (9) we get 𝑓(𝑥) ∧ (𝑓(𝑦) ∨ 𝑓(𝑧)) = {𝑓(𝑥) ∧ 𝑓(𝑦)} ∨ {𝑓(𝑥) ∧ 𝑓(𝑧)}.
Thus the result follows.

Example 3.3.9: If 𝐷 is a distributive lattice and 𝐸 is a non-empty set then the set 𝐷𝐸 of mappings 𝑓: 𝐸 → 𝐷, ordered by 𝑓 ≤ 𝑔 if and only if (for all 𝑥 ∈ 𝐸) 𝑓(𝑥) ≤ 𝑔(𝑥), is a distributive lattice.

Proof: We have already proved that 𝐷𝐸 is a lattice with respect to this order. So we only need
to verify the distributivity of 𝐷𝐸 . Since 𝐷 is distributive; for all 𝑥 ∈ 𝐸, 𝑓(𝑥), 𝑔(𝑥), ℎ(𝑥) ∈ 𝐷.
We have
𝑓(𝑥) ∧ (𝑔(𝑥) ∨ ℎ(𝑥)) = {𝑓(𝑥) ∧ 𝑔(𝑥)} ∨ {𝑓(𝑥) ∧ ℎ(𝑥)}
if and only if 𝑓 ∧ (𝑔 ∨ ℎ) = (𝑓 ∧ 𝑔) ∨ (𝑓 ∧ ℎ).
Hence 𝐷𝐸 is distributive.

Theorem 3.3.10: A lattice 𝐿 is distributive if and only if for all 𝑥, 𝑦, 𝑧 ∈ 𝐿;
(𝑥 ∧ 𝑦) ∨ (𝑦 ∧ 𝑧) ∨ (𝑧 ∧ 𝑥) = (𝑥 ∨ 𝑦) ∧ (𝑦 ∨ 𝑧) ∧ (𝑧 ∨ 𝑥). (10)

Proof: Suppose that (10) holds; we show that 𝐿 is distributive. Assume first that 𝑥 ≥ 𝑧. Then the L.H.S of (10) reduces to;
(𝑥 ∧ 𝑦) ∨ (𝑦 ∧ 𝑧) ∨ (𝑧 ∧ 𝑥) = (𝑥 ∧ 𝑦) ∨ (𝑦 ∧ 𝑧) ∨ 𝑧 (since (𝑧 ∧ 𝑥) = 𝑧)
= (𝑥 ∧ 𝑦) ∨ 𝑧 (since 𝑦 ∧ 𝑧 ≤ 𝑧).
Now consider the R.H.S of (10) we have;
(𝑥 ∨ 𝑦) ∧ (𝑦 ∨ 𝑧) ∧ (𝑧 ∨ 𝑥) = (𝑥 ∨ 𝑦) ∧ (𝑦 ∨ 𝑧) ∧ 𝑥 (since 𝑥 ≥ 𝑧)
= 𝑥 ∧ (𝑥 ∨ 𝑦) ∧ (𝑦 ∨ 𝑧) (since meet operation is associative)
= 𝑥 ∧ (𝑦 ∨ 𝑧) (since 𝑥 ≤ 𝑥 ∨ 𝑦)
Hence from (10) we have (𝑥 ∧ 𝑦) ∨ 𝑧 = 𝑥 ∧ (𝑦 ∨ 𝑧),which is a modular law. Taking L.H.S. of
(10) = 𝑢 and R.H.S = 𝑣 so that 𝑢 = 𝑣 which implies 𝑥 ∧ 𝑢 = 𝑥 ∧ 𝑣.
Now 𝑥 ∧ 𝑢 = 𝑥 ∧ [(𝑦 ∧ 𝑧) ∨ (𝑥 ∧ 𝑦) ∨ (𝑥 ∧ 𝑧)] (by assumption)
= (𝑥 ∧ 𝑦 ∧ 𝑧) ∨ (𝑥 ∧ 𝑦) ∨ (𝑥 ∧ 𝑧) (by the modularity of 𝐿, since (𝑥 ∧ 𝑦) ∨ (𝑥 ∧ 𝑧) ≤ 𝑥)
= (𝑥 ∧ 𝑦) ∨ (𝑥 ∧ 𝑧) (since 𝑥 ∧ 𝑦 ∧ 𝑧 ≤ 𝑥 ∧ 𝑦).
Also 𝑥 ∧ 𝑣 = 𝑥 ∧ [(𝑥 ∨ 𝑦) ∧ (𝑦 ∨ 𝑧) ∧ (𝑧 ∨ 𝑥)] (by assumption)
= [𝑥 ∧ (𝑥 ∨ 𝑦)] ∧ [(𝑦 ∨ 𝑧) ∧ (𝑧 ∨ 𝑥)] (since the meet operation is associative)
= [𝑥 ∧ (𝑧 ∨ 𝑥)] ∧ (𝑦 ∨ 𝑧) (since 𝑥 ∧ (𝑥 ∨ 𝑦) = 𝑥)
= 𝑥 ∧ (𝑦 ∨ 𝑧) (since 𝑥 ≤ 𝑧 ∨ 𝑥).
So that 𝑥 ∧ (𝑦 ∨ 𝑧) = (𝑥 ∧ 𝑦) ∨ (𝑥 ∧ 𝑧).
Conversely suppose that 𝐿 is distributive. Applying distributivity to the R.H.S of (10) we get
(𝑥 ∨ 𝑦) ∧ (𝑦 ∨ 𝑧) ∧ (𝑧 ∨ 𝑥) = {(𝑥 ∨ 𝑦) ∧ (𝑦 ∨ 𝑧) ∧ 𝑧} ∨ {(𝑥 ∨ 𝑦) ∧ (𝑦 ∨ 𝑧) ∧ 𝑥}
= {(𝑥 ∨ 𝑦) ∧ 𝑧} ∨ {(𝑦 ∨ 𝑧) ∧ 𝑥} (by the connecting lemma)
= (𝑧 ∧ 𝑥) ∨ (𝑧 ∧ 𝑦) ∨ (𝑥 ∧ 𝑦) ∨ (𝑥 ∧ 𝑧) (by distributivity)
= (𝑥 ∧ 𝑦) ∨ (𝑦 ∧ 𝑧) ∨ (𝑧 ∧ 𝑥).
Thus distributivity implies (10).
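Identity (10), often called the median identity, can be checked in a concrete distributive lattice such as the divisors of 60 under divisibility, where meet is gcd and join is lcm (our own illustration; the number 60 is an arbitrary assumption):

```python
from itertools import product
from math import gcd

def lcm(a, b):
    """Join in the divisibility lattice."""
    return a * b // gcd(a, b)

divisors = [d for d in range(1, 61) if 60 % d == 0]

# (x ∧ y) ∨ (y ∧ z) ∨ (z ∧ x) = (x ∨ y) ∧ (y ∨ z) ∧ (z ∨ x)
median_ok = all(
    lcm(lcm(gcd(x, y), gcd(y, z)), gcd(z, x))
    == gcd(gcd(lcm(x, y), lcm(y, z)), lcm(z, x))
    for x, y, z in product(divisors, repeat=3)
)
print(median_ok)   # True
```

The check runs over all 12³ triples of divisors; it would fail on 𝑀3 or 𝑁5, neither of which embeds in a divisibility lattice.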
Theorem 3.3.11: If 𝐿 is a distributive lattice, then the ideal lattice 𝔗(𝐿) is distributive.

Proof: Suppose that 𝐿 is distributive, let 𝐼, 𝐽, 𝐾 ∈ 𝔗(𝐿) we have;


𝐼 ∨ (𝐽 ∧ 𝐾) ⊆ (𝐼 ∨ 𝐽) ∧ (𝐼 ∨ 𝐾) holds for all lattices.
For the reverse inclusion, if 𝑥 ∈ (𝐼 ∨ 𝐽) ∧ (𝐼 ∨ 𝐾) then 𝑥 ∈ 𝐼 ∨ 𝐽 and 𝑥 ∈ 𝐼 ∨ 𝐾, which implies that there exist 𝑎1, 𝑎2 ∈ 𝐼, 𝑏 ∈ 𝐽, 𝑐 ∈ 𝐾 with 𝑥 ≤ 𝑎1 ∨ 𝑏 and 𝑥 ≤ 𝑎2 ∨ 𝑐. Since 𝐼 is an ideal, 𝑎 = 𝑎1 ∨ 𝑎2 ∈ 𝐼; this implies 𝑥 ≤ (𝑎 ∨ 𝑏) ∧ (𝑎 ∨ 𝑐) = 𝑎 ∨ (𝑏 ∧ 𝑐) (by the distributivity of 𝐿). Thus 𝑥 ∈ 𝐼 ∨ (𝐽 ∧ 𝐾), and so (𝐼 ∨ 𝐽) ∧ (𝐼 ∨ 𝐾) ⊆ 𝐼 ∨ (𝐽 ∧ 𝐾). This gives 𝐼 ∨ (𝐽 ∧ 𝐾) = (𝐼 ∨ 𝐽) ∧ (𝐼 ∨ 𝐾). Thus 𝔗(𝐿) is distributive.

Definition 3.3.12: The direct product 𝑃𝑄 of two partially ordered sets 𝑃 and 𝑄 is the set of all
couples (𝑥, 𝑦) with 𝑥 ∈ 𝑃, 𝑦 ∈ 𝑄 partially ordered by the rule that (𝑥1 , 𝑦1 ) ≤ (𝑥2 , 𝑦2 ) if and
only if 𝑥1 ≤ 𝑥2 in 𝑃 and 𝑦1 ≤ 𝑦2 in 𝑄.

Proposition 3.3.13: The direct product 𝐿𝑀 of any two distributive lattices is a distributive
lattice.

Proof: For any two elements (𝑥1, 𝑦1) and (𝑥2, 𝑦2) in 𝐿𝑀, the element (𝑥1 ∨ 𝑥2, 𝑦1 ∨ 𝑦2) contains both of the elements (𝑥1, 𝑦1), (𝑥2, 𝑦2), hence is an upper bound for the pair. Let (𝑢, 𝑣) be any upper bound for the pair (𝑥1, 𝑦1), (𝑥2, 𝑦2); then 𝑢 ≥ 𝑥1, 𝑥2 implies 𝑢 ≥ 𝑥1 ∨ 𝑥2. Likewise we have 𝑣 ≥ 𝑦1 ∨ 𝑦2. Hence (𝑥1 ∨ 𝑥2, 𝑦1 ∨ 𝑦2) = (𝑥1, 𝑦1) ∨ (𝑥2, 𝑦2). Dually we can show that;
(𝑥1 ∧ 𝑥2 , 𝑦1 ∧ 𝑦2 ) = (𝑥1 , 𝑦1 ) ∧ (𝑥2 , 𝑦2 ). Which proves that 𝐿𝑀 is a lattice.
To prove distributivity, let 𝑥, 𝑦, 𝑧 ∈ 𝐿𝑀 then we have 𝑥 = (𝑥1 , 𝑦1 ), 𝑦 = (𝑥2 , 𝑦2 ) , 𝑧 =
(𝑥3 , 𝑦3 ) where 𝑥1 , 𝑥2 , 𝑥3 ∈ 𝐿 and 𝑦1 , 𝑦2 , 𝑦3 ∈ 𝑀.
Now, 𝑥 ∧ (𝑦 ∨ 𝑧) = (𝑥1, 𝑦1) ∧ [(𝑥2, 𝑦2) ∨ (𝑥3, 𝑦3)] (by assumption)
= (𝑥1, 𝑦1) ∧ (𝑥2 ∨ 𝑥3, 𝑦2 ∨ 𝑦3) (by definition)
= (𝑥1 ∧ (𝑥2 ∨ 𝑥3), 𝑦1 ∧ (𝑦2 ∨ 𝑦3)) (by definition)
= ((𝑥1 ∧ 𝑥2) ∨ (𝑥1 ∧ 𝑥3), (𝑦1 ∧ 𝑦2) ∨ (𝑦1 ∧ 𝑦3)) (since 𝐿, 𝑀 are distributive)
= [(𝑥1, 𝑦1) ∧ (𝑥2, 𝑦2)] ∨ [(𝑥1, 𝑦1) ∧ (𝑥3, 𝑦3)]
= (𝑥 ∧ 𝑦) ∨ (𝑥 ∧ 𝑧).
So 𝐿𝑀 is distributive.
Proposition 3.3.14: A lattice 𝐿 is distributive if and only if
𝑥 ∨ (𝑦 ∧ 𝑧) ≥ (𝑥 ∨ 𝑦) ∧ 𝑧 for all 𝑥, 𝑦, 𝑧 ∈ 𝐿.

Proof: Suppose that 𝐿 is distributive, then for all 𝑥, 𝑦, 𝑧 ∈ 𝐿 we have


𝑥 ∨ (𝑦 ∧ 𝑧) = (𝑥 ∨ 𝑦) ∧ (𝑥 ∨ 𝑧) ≥ (𝑥 ∨ 𝑦) ∧ 𝑧. (since 𝑥 ∨ 𝑧 ≥ 𝑧)
Conversely suppose that 𝑥 ∨ (𝑦 ∧ 𝑧) ≥ (𝑥 ∨ 𝑦) ∧ 𝑧 holds for all 𝑥, 𝑦, 𝑧 ∈ 𝐿; we show 𝐿 is distributive. Since 𝑥 ∨ (𝑦 ∧ 𝑧) ≤ (𝑥 ∨ 𝑦) ∧ (𝑥 ∨ 𝑧) holds in every lattice, we have only to show the reverse inequality. Applying the hypothesis twice,
(𝑥 ∨ 𝑦) ∧ (𝑥 ∨ 𝑧) ≤ 𝑥 ∨ (𝑦 ∧ (𝑥 ∨ 𝑧)) = 𝑥 ∨ ((𝑥 ∨ 𝑧) ∧ 𝑦) ≤ 𝑥 ∨ 𝑥 ∨ (𝑧 ∧ 𝑦) = 𝑥 ∨ (𝑦 ∧ 𝑧).
This implies 𝑥 ∨ (𝑦 ∧ 𝑧) = (𝑥 ∨ 𝑦) ∧ (𝑥 ∨ 𝑧). Hence 𝐿 is distributive.

Theorem 3.3.15: A lattice 𝐿 is distributive if and only if it has no sublattice of either of the
forms given below. Equivalently, 𝐿 is distributive if and only if 𝑧 ∧ 𝑥 = 𝑧 ∧ 𝑦 and 𝑧 ∨ 𝑥 = 𝑧 ∨
𝑦 implies 𝑥 = 𝑦.

Proof: Observe first that the two statements are equivalent. In fact, if 𝑥 ∧ 𝑧 = 𝑦 ∧ 𝑧 and 𝑥 ∨ 𝑧 = 𝑦 ∨ 𝑧 with 𝑥 ≠ 𝑦, then the two lattices shown above arise according as 𝑥 and 𝑦 are comparable or incomparable.
Now suppose that 𝐿 is distributive and that there exists 𝑥, 𝑦, 𝑧 ∈ 𝐿 such that 𝑥 ∧ 𝑧 = 𝑦 ∧ 𝑧
and
𝑥 ∨ 𝑧 = 𝑦 ∨ 𝑧 then we have;
𝑥 = 𝑥 ∧ ( 𝑥 ∨ 𝑧) (since 𝑥 ≤ 𝑥 ∨ 𝑧)
= 𝑥 ∧ (𝑦 ∨ 𝑧) (since 𝑧 ∨ 𝑥 = 𝑧 ∨ 𝑦)
= (𝑥 ∧ 𝑦) ∨ (𝑥 ∧ 𝑧) (by distributivity of 𝐿)
= (𝑥 ∧ 𝑦) ∨ (𝑦 ∧ 𝑧) (since 𝑥 ∧ 𝑧 = 𝑦 ∧ 𝑧)
= 𝑦 ∧ (𝑥 ∨ 𝑧) (by distributivity of 𝐿)
= 𝑦 ∧ (𝑦 ∨ 𝑧) (since 𝑧 ∨ 𝑥 = 𝑧 ∨ 𝑦)
= 𝑦. (since 𝑦 ∨ 𝑧 ≥ 𝑦)
Consequently, 𝐿 has no sublattice of either of the forms.
Conversely, if 𝐿 has no sublattice of either of the above forms then by Theorem 3.1.9 𝐿
must be modular.
Given 𝑎, 𝑏, 𝑐 ∈ 𝐿, we define 𝑎⋆ = (𝑏 ∨ 𝑐) ∧ 𝑎, 𝑏 ⋆ = (𝑐 ∨ 𝑎) ∧ 𝑏, 𝑐 ⋆ = (𝑎 ∨ 𝑏) ∧ 𝑐, then
𝑎⋆ ∧ 𝑐 ⋆ = [(𝑏 ∨ 𝑐) ∧ 𝑎] ∧ (𝑎 ∨ 𝑏) ∧ 𝑐 = (𝑏 ∨ 𝑐) ∧ (𝑎 ∧ 𝑐) = [(𝑏 ∨ 𝑐) ∧ 𝑐] ∧ 𝑎 = 𝑎 ∧ 𝑐.
Similarly 𝑏 ⋆ ∧ 𝑐 ⋆ = 𝑏 ∧ 𝑐 and 𝑎⋆ ∧ 𝑏 ⋆ = 𝑎 ∧ 𝑏
Now let 𝑑 = (𝑎 ∨ 𝑏) ∧ (𝑏 ∨ 𝑐) ∧ (𝑐 ∨ 𝑎) then;
𝑎⋆ ∨ 𝑐⋆ = 𝑎⋆ ∨ [(𝑎 ∨ 𝑏) ∧ 𝑐] (by definition of 𝑐⋆)
= (𝑎⋆ ∨ 𝑐) ∧ (𝑎 ∨ 𝑏) (by the modularity of 𝐿, since 𝑎⋆ ≤ 𝑎 ∨ 𝑏)
= [((𝑏 ∨ 𝑐) ∧ 𝑎) ∨ 𝑐] ∧ (𝑎 ∨ 𝑏) (since 𝑎⋆ = (𝑏 ∨ 𝑐) ∧ 𝑎)
= [(𝑏 ∨ 𝑐) ∧ (𝑎 ∨ 𝑐)] ∧ (𝑎 ∨ 𝑏) (by the modularity of 𝐿, since 𝑐 ≤ 𝑏 ∨ 𝑐)
= 𝑑.
By symmetry we deduce that 𝑎⋆ ∨ 𝑐⋆ = 𝑎⋆ ∨ 𝑏⋆ = 𝑏⋆ ∨ 𝑐⋆ = 𝑑.

We now observe that
𝑐⋆ ∨ [𝑎⋆ ∨ (𝑏 ∧ 𝑐)] = 𝑑 ∨ (𝑏 ∧ 𝑐) = 𝑑 and
𝑐⋆ ∧ [𝑎⋆ ∨ (𝑏 ∧ 𝑐)] = (𝑐⋆ ∧ 𝑎⋆) ∨ (𝑏 ∧ 𝑐) = (𝑎 ∧ 𝑐) ∨ (𝑏 ∧ 𝑐),
and by symmetry
𝑐⋆ ∨ [𝑏⋆ ∨ (𝑎 ∧ 𝑐)] = 𝑑 and
𝑐⋆ ∧ [𝑏⋆ ∨ (𝑎 ∧ 𝑐)] = (𝑏 ∧ 𝑐) ∨ (𝑎 ∧ 𝑐).
By the hypothesis we deduce that 𝑎⋆ ∨ (𝑏 ∧ 𝑐) = 𝑏⋆ ∨ (𝑎 ∧ 𝑐), whence
𝑎⋆ ∨ (𝑏 ∧ 𝑐) = 𝑎⋆ ∨ (𝑏 ∧ 𝑐) ∨ 𝑏⋆ ∨ (𝑎 ∧ 𝑐) = 𝑎⋆ ∨ 𝑏⋆ = 𝑑.
It follows from this that (𝑎 ∨ 𝑏) ∧ 𝑐 = 𝑐⋆ = 𝑐⋆ ∧ 𝑑 = 𝑐⋆ ∧ [𝑎⋆ ∨ (𝑏 ∧ 𝑐)] = (𝑐⋆ ∧ 𝑎⋆) ∨ (𝑏 ∧ 𝑐) = (𝑎 ∧ 𝑐) ∨ (𝑏 ∧ 𝑐). Thus 𝐿 is distributive.

Example 3.3.16: The set 𝑁 = {1,2,3, … } ordered by divisibility is a distributive lattice.

Solution: We know that (𝑁; |) is a lattice with sup{𝑚, 𝑛} = lcm{𝑚, 𝑛} and inf{𝑚, 𝑛} = gcd{𝑚, 𝑛}. To show that 𝑁 is distributive we use the above theorem. Let 𝑥, 𝑦, 𝑧 ∈ 𝑁 be such that 𝑥 ∨ 𝑦 = 𝑧 ∨ 𝑦 and 𝑥 ∧ 𝑦 = 𝑧 ∧ 𝑦, that is lcm{𝑥, 𝑦} = lcm{𝑧, 𝑦} and gcd{𝑥, 𝑦} = gcd{𝑧, 𝑦}. This implies
𝑥𝑦/gcd{𝑥, 𝑦} = 𝑧𝑦/gcd{𝑧, 𝑦} = 𝑧𝑦/gcd{𝑥, 𝑦},
which gives 𝑥𝑦 = 𝑧𝑦, so 𝑥 = 𝑧. Thus by Theorem 3.3.15, 𝑁 is distributive.
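The cancellation argument can be confirmed numerically (a sketch of ours; the search bound 30 is an arbitrary assumption):

```python
from itertools import product
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

# In (N, |): if lcm{x, y} = lcm{z, y} and gcd{x, y} = gcd{z, y} then x = z.
cancellation = all(
    x == z
    for x, y, z in product(range(1, 31), repeat=3)
    if lcm(x, y) == lcm(z, y) and gcd(x, y) == gcd(z, y)
)
print(cancellation)   # True
```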

Proposition 3.3.16: A lattice 𝐿 is distributive if and only if, for all ideals 𝐼, 𝐽 of 𝐿;
𝐼 ∨ 𝐽 = {𝑖 ∨ 𝑗 : 𝑖 ∈ 𝐼, 𝑗 ∈ 𝐽}.

Proof: Suppose 𝐿 is distributive and take 𝑡 ∈ 𝐼 ∨ 𝐽; then 𝑡 ≤ 𝑖 ∨ 𝑗 with 𝑖 ∈ 𝐼, 𝑗 ∈ 𝐽. By the distributivity of 𝐿 we have 𝑡 = 𝑡 ∧ (𝑖 ∨ 𝑗) = (𝑡 ∧ 𝑖) ∨ (𝑡 ∧ 𝑗) = 𝑖1 ∨ 𝑗1, where 𝑖1 = 𝑡 ∧ 𝑖 ∈ 𝐼 and 𝑗1 = 𝑡 ∧ 𝑗 ∈ 𝐽, since 𝐼, 𝐽 are ideals of 𝐿. Thus 𝑡 = 𝑖1 ∨ 𝑗1 for 𝑖1 ∈ 𝐼, 𝑗1 ∈ 𝐽. This gives 𝐼 ∨ 𝐽 = {𝑖 ∨ 𝑗 : 𝑖 ∈ 𝐼, 𝑗 ∈ 𝐽}.
For the converse suppose that 𝐼 ∨ 𝐽 = {𝑖 ∨ 𝑗 : 𝑖 ∈ 𝐼, 𝑗 ∈ 𝐽} for all ideals, and suppose, if possible, that 𝐿 is not distributive. Then there exist three elements 𝑎, 𝑏, 𝑐 as in 𝑀3 (the 𝑁5 case is handled similarly). Now consider the principal ideals 𝐼 = (𝑏) and 𝐽 = (𝑐); we have 𝑎 ≤ 𝑏 ∨ 𝑐 and so 𝑎 ∈ 𝐼 ∨ 𝐽. But 𝑎 cannot be written as 𝑎 = 𝑖 ∨ 𝑗 with 𝑖 ∈ 𝐼, 𝑗 ∈ 𝐽: if it could, then 𝑖 ≤ 𝑎 and 𝑗 ≤ 𝑎; since 𝑗 ∈ 𝐽 = (𝑐) we also have 𝑗 ≤ 𝑐, so 𝑗 ≤ 𝑎 ∧ 𝑐 = 0. Thus 𝑎 = 𝑖 ∨ 𝑗 = 𝑖 ∈ 𝐼 = (𝑏), i.e. 𝑎 ≤ 𝑏, which is a contradiction. Hence 𝐿 is distributive.

3.4: Complemented lattices:


Before defining the complemented lattice, we first prove the following lemma;

Lemma 3.4.1: Let (𝐿, ≤) be a lattice with universal upper and lower bounds 1 and 0. For any element 𝑎 ∈ 𝐿: 𝑎 ∨ 1 = 1, 𝑎 ∧ 1 = 𝑎 and 𝑎 ∨ 0 = 𝑎, 𝑎 ∧ 0 = 0.

Proof : We know that for any 𝑎, 𝑏 ∈ 𝐿 ; 𝑎 ≤ 𝑎 ∨ 𝑏 and 𝑎 ∧ 𝑏 ≤ 𝑎. (1)


So by using (1), we get 1 ≤ 𝑎 ∨ 1 (2)
Since 1 is the upper bound of 𝐿 and 𝑎 ∨ 1 ∈ 𝐿. This implies that 𝑎 ∨ 1 ≤ 1 (3)
From (2) and (3) we get 𝑎 ∨ 1 = 1.
Since 𝑎 ∧ 1 ≤ 𝑎 . (4)
Also by reflexivity, 𝑎 ≤ 𝑎, and 𝑎 ≤ 1; therefore we get 𝑎 ≤ 𝑎 ∧ 1. (5)
Thus from (4) and (5) we get 𝑎 ∧ 1 = 𝑎. Similarly we can prove that 𝑎 ∨ 0 = 𝑎 and 𝑎 ∧ 0 = 0.

Complemented elements:
Definition 3.4.2: If 𝐿 is a bounded lattice then we say that 𝑦 ∈ 𝐿 is a complement of 𝑥 ∈ 𝐿 if
𝑥 ∧ 𝑦 = 0 and 𝑥 ∨ 𝑦 = 1. In this case we say 𝑥 is complemented element of 𝐿.

Since the meet and join operations are commutative, 𝑥 ∧ 𝑦 = 0 if and only if 𝑦 ∧ 𝑥 = 0, and 𝑥 ∨ 𝑦 = 1 if and only if 𝑦 ∨ 𝑥 = 1. Thus the definition of complement is symmetric in 𝑥 and 𝑦, so that 𝑦 is a complement of 𝑥 if and only if 𝑥 is a complement of 𝑦. Thus we conclude that every complement of a complemented element is itself complemented.

Example 3.4.3: In each of the lattices

the first of which is non-modular and the second modular but not distributive, the elements 𝑥 and 𝑦 are complements of 𝑧; thus in this case 𝑧 has two complements. In general a complement need not be unique. Also, from the above lemma we have 0 ∧ 1 = 0 and 0 ∨ 1 = 1, which shows that 0 and 1 are complements of each other. It is easy to show that 1 is the only complement of 0. In fact, if 𝑐 ≠ 1 is a complement of 0 with 𝑐 ∈ 𝐿, then 0 ∧ 𝑐 = 0 and 0 ∨ 𝑐 = 1; but since 0 ≤ 𝑐 we have 0 ∨ 𝑐 = 𝑐, and 𝑐 ≠ 1 gives a contradiction. In a similar manner we can show that 0 is the only complement of 1.

Definition 3.4.4: We say that a lattice 𝐿 is complemented if every element of 𝐿 is


complemented.

Example 3.4.5: Let 𝑉 be a vector space and consider the lattice 𝐿(𝑉) of subspaces of 𝑉. We
have seen that 𝐿(𝑉) is modular (Example 3.1.8). It is also complemented. To establish this we
observe that if 𝑊 is a subspace of 𝑉, then any basis of 𝑊 can be extended to a basis of 𝑉 by
means of a set 𝐴 = {𝑥𝛼 : 𝛼 ∈ 𝐼} of elements of 𝑉. The subspace generated by 𝐴 then serves as
a complement of 𝑊 in 𝐿(𝑉).

Definition 3.4.6: We say that a lattice 𝐿 is relatively complemented if every interval [𝑥, 𝑦] of
𝐿 is complemented. A complement in [𝑥, 𝑦] of 𝑎 ∈ [𝑥, 𝑦] is called relative complement of 𝑎.

Theorem 3.4.7: Any complemented modular lattice 𝐿 is relatively complemented.

Proof: Let 𝐿 be a complemented modular lattice. Given any [𝑎, 𝑏] ⊆ 𝐿 and 𝑥 ∈ [𝑎, 𝑏], let 𝑦 be
the complement of 𝑥 in 𝐿. Consider the element
𝑧 = 𝑏 ∧ (𝑎 ∨ 𝑦) = (𝑏 ∧ 𝑦) ∨ 𝑎 (by the modularity of 𝐿, since 𝑎 ≤ 𝑏).
Then clearly 𝑧 ≥ 𝑎 and 𝑧 ≤ 𝑏, so that 𝑧 ∈ [𝑎, 𝑏], and by modularity (using 𝑎 ≤ 𝑥 ≤ 𝑏),
𝑥 ∧ 𝑧 = 𝑥 ∧ (𝑦 ∨ 𝑎) = (𝑥 ∧ 𝑦) ∨ 𝑎 = 0 ∨ 𝑎 = 𝑎;
𝑥 ∨ 𝑧 = 𝑥 ∨ (𝑏 ∧ 𝑦) = (𝑥 ∨ 𝑦) ∧ 𝑏 = 1 ∧ 𝑏 = 𝑏.
Thus 𝑧 is complement of 𝑥 in [𝑎, 𝑏].
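The construction 𝑧 = 𝑏 ∧ (𝑎 ∨ 𝑦) can be traced in the complemented modular lattice of subsets of a finite set (a worked example of ours; the particular sets are arbitrary assumptions):

```python
U = frozenset(range(6))          # top element 1; the bottom is the empty set
a = frozenset({0})               # bottom of the interval [a, b]
b = frozenset({0, 1, 2, 3})      # top of the interval [a, b]
x = frozenset({0, 1})            # an element of [a, b]

y = U - x                        # complement of x in P(U)
z = b & (a | y)                  # z = b ∧ (a ∨ y)

print(sorted(z))                 # [0, 2, 3]
assert x & z == a and x | z == b # z is the relative complement of x in [a, b]
```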

Theorem 3.4.8: In a distributive lattice all complements and relative complements are unique.

Proof: We first show that if an element in a distributive lattice has a complement then this
complement is unique. Suppose that an element 𝑎 has two complements 𝑏 and 𝑐. We have
𝑏 = 𝑏∧1 (since 1 ≥ 𝑏)
= 𝑏 ∧ (𝑎 ∨ 𝑐) (since 𝑐 is complement of 𝑎)
= (𝑏 ∧ 𝑎) ∨ (𝑏 ∧ 𝑐) (by definition of distributivity)
= 0 ∨ (𝑏 ∧ 𝑐) (since 𝑏 ∧ 𝑎 = 0)
= (𝑎 ∧ 𝑐) ∨ (𝑏 ∧ 𝑐) (since 𝑎 ∧ 𝑐 = 0)
= (𝑎 ∨ 𝑏) ∧ 𝑐 (by definition of distributive lattice)
=1∧𝑐 (since 𝑎 ∨ 𝑏 = 1)
=𝑐 (since 1 ≥ 𝑐).

Uniqueness of relative complements:

Given any [𝑎, 𝑏] ⊆ 𝐿 (𝐿 a distributive lattice) and 𝑥 ∈ [𝑎, 𝑏], let 𝑝 and 𝑞 be relative complements of 𝑥 in [𝑎, 𝑏]. We have
𝑝 = 𝑝 ∧ 𝑏 (since 𝑝 ≤ 𝑏)
= 𝑝 ∧ (𝑥 ∨ 𝑞) (since 𝑞 is a relative complement of 𝑥)
= (𝑝 ∧ 𝑥) ∨ (𝑝 ∧ 𝑞) (by the distributivity of 𝐿)
= 𝑎 ∨ (𝑝 ∧ 𝑞) (since 𝑝 ∧ 𝑥 = 𝑎)
= (𝑥 ∧ 𝑞) ∨ (𝑝 ∧ 𝑞) (since 𝑥 ∧ 𝑞 = 𝑎)
= (𝑥 ∨ 𝑝) ∧ 𝑞 (by the distributivity of 𝐿)
= 𝑏 ∧ 𝑞 (since 𝑥 ∨ 𝑝 = 𝑏)
= 𝑞 (since 𝑞 ≤ 𝑏).

3.5 Uniquely complemented lattice

In view of the above theorem, it is natural to consider complemented lattices in which complements


are unique. In such a lattice we shall use the notation 𝑥′ to denote the unique complement of 𝑥.
Denoting likewise the complement of 𝑥′ by 𝑥′′ we have then by the symmetry of the definition
𝑥 is the complement of 𝑥′, hence by uniqueness 𝑥′′ = 𝑥. There is an interesting history concerning these lattices: it was long suspected that every uniquely complemented lattice is distributive. However, this is not the case; in fact, R. P. Dilworth established the remarkable result that every lattice can be embedded in a uniquely complemented lattice. The following results will illustrate the difficulty in seeking uniquely complemented lattices that are not distributive.
Definition 3.5.1: If 𝐿 is a lattice with bottom element 0, then by an atom of 𝐿 we shall mean an element 𝑎 such that 0 ≺ 𝑎. If for every 𝑥 ∈ 𝐿\{0} there is an atom 𝑎 such that 𝑎 ≤ 𝑥 then we say that 𝐿 is atomic. Dually, we have the notion of a coatom and that of 𝐿 being coatomic.

Theorem 3.5.2: (Birkhoff -Ward) Every uniquely complemented atomic lattice is


distributive.

Proof: We establish the proof by means of the following non-trivial observations.


(1) If 𝑥 > 𝑦 then there is an atom 𝑝 such that 𝑝 ≤ 𝑥 and 𝑝 ∧ 𝑦 = 0.

In fact, 𝑥 > 𝑦 gives 𝑦′ ∨ 𝑥 ≥ 𝑦′ ∨ 𝑦 = 1, so 𝑦′ ∨ 𝑥 = 1. We cannot then have 𝑦′ ∧ 𝑥 = 0: for if 𝑦′ ∧ 𝑥 = 0 then 𝑥 would be a complement of 𝑦′, giving 𝑥 = 𝑦′′ = 𝑦, which is a contradiction. Thus 𝑦′ ∧ 𝑥 > 0 and so, 𝐿 being atomic, there is an atom 𝑝 such that 𝑝 ≤ 𝑥 ∧ 𝑦′, whence 𝑝 ≤ 𝑥 and 𝑝 ≤ 𝑦′, the latter giving 𝑝 ∧ 𝑦 = 0.

(2) If 𝑥 and 𝑦 contain the same atoms then 𝑥 = 𝑦.


In fact, every atom 𝑎 under 𝑥 is also under 𝑦, so 𝑎 ≤ 𝑥 ∧ 𝑦; thus 𝑥 and 𝑥 ∧ 𝑦 contain the same atoms. Suppose that 𝑥 ∧ 𝑦 < 𝑥. Then by (1) there would exist an atom 𝑝 contained in 𝑥 with 𝑝 ∧ (𝑥 ∧ 𝑦) = 0, contradicting the previous observation. Hence 𝑥 ∧ 𝑦 = 𝑥, whence 𝑥 ≤ 𝑦. Similarly, we have 𝑦 ≤ 𝑥 and so 𝑥 = 𝑦.

(3) The complement of an atom is a coatom.


Let 𝑝 be an atom; then 𝑝 ≠ 0. We claim that 𝑝′ ≠ 1: for if 𝑝′ = 1 then 𝑝 = 𝑝′′ = 1′ = 0, which is a contradiction. Suppose that 𝑝′ < 𝑥 < 1. Then 𝑝 ∨ 𝑥 ≥ 𝑝 ∨ 𝑝′ = 1. But 𝑝 being an atom, either 𝑝 ∧ 𝑥 = 𝑝 or 𝑝 ∧ 𝑥 = 0. The former gives 𝑝 ≤ 𝑥, which implies 𝑥 ≥ 𝑝 ∨ 𝑝′ = 1, a contradiction; the latter makes 𝑥 a complement of 𝑝, so that 𝑥 = 𝑝′ by uniqueness, again a contradiction. Hence 𝑝′ is a coatom.

(4) If 𝑝 and 𝑞 are distinct atoms then 𝑞 ≤ 𝑝’.


By (3), both 𝑝′ and 𝑞′ are coatoms. Suppose that 𝑞 ≰ 𝑝′. Then, 𝑞 being an atom and 𝑝′ a coatom, necessarily 𝑞 ∧ 𝑝′ = 0 and 𝑞 ∨ 𝑝′ = 1, so that 𝑞 is a complement of 𝑝′; by uniqueness 𝑞 = 𝑝′′ = 𝑝, which contradicts 𝑞 ≠ 𝑝.

(5) If 𝑝 is an atom then 𝑝 ∧ 𝑥 = 0 if and only if 𝑥 ≤ 𝑝′.


If 𝑝 ∧ 𝑥 = 0 then 𝑝 ≰ 𝑥, so every atom 𝑞 under 𝑥 is distinct from 𝑝. Thus, by (4), every atom under 𝑥 lies under 𝑝′ and hence is an atom under 𝑥 ∧ 𝑝′. Since the converse is also true, it follows by (2) that 𝑥 = 𝑥 ∧ 𝑝′, whence 𝑥 ≤ 𝑝′. Conversely, if 𝑥 ≤ 𝑝′ then 𝑝 ∧ 𝑥 ≤ 𝑝 ∧ 𝑝′ = 0.

(6) If 𝑝 is an atom then 𝑝 ≤ 𝑥 ∨ 𝑦 if and only if 𝑝 ≤ 𝑥 or 𝑝 ≤ 𝑦.

Suppose that 𝑝 ≤ 𝑥 ∨ 𝑦. Since 𝑝 is an atom we have either 𝑝 ∧ 𝑥 = 𝑝 or 𝑝 ∧ 𝑥 = 0, i.e. either


𝑝 ≤ 𝑥 or, by (5), 𝑥 ≤ 𝑝′. Likewise, either 𝑝 ≤ 𝑦 or 𝑦 ≤ 𝑝′. If 𝑥 ≤ 𝑝′ and 𝑦 ≤ 𝑝′ then 𝑥 ∨ 𝑦 ≤ 𝑝′, which gives 𝑝 ≤ 𝑥 ∨ 𝑦 ≤ 𝑝′ and the contradiction 𝑝 = 𝑝 ∧ 𝑝′ = 0. Thus we must have either 𝑝 ≤ 𝑥 or 𝑝 ≤ 𝑦; the converse is clear.

With these technical details to hand suppose now that 𝒜 is the set of atoms of 𝐿. For every 𝑥 ∈
𝐿 let 𝒜𝑥 = {𝑎 ∈ 𝒜: 𝑎 ≤ 𝑥}, and consider the mapping 𝑓: 𝐿 → ℙ(𝒜) given by the prescription
𝑓(𝑥) = 𝒜𝑥 . It is clear from (2) that 𝑓 is injective. Moreover, using (6) we see that
𝒜𝑥∨𝑦 = 𝒜𝑥 ∪ 𝒜𝑦 and so 𝑓 is a join morphism. Now clearly for 𝑝 ∈ 𝒜, we have 𝑝 ≤ 𝑥 ∧ 𝑦 if
and only if 𝑝 ≤ 𝑥 and 𝑝 ≤ 𝑦. It follows that 𝒜𝑥∧𝑦 = 𝒜𝑥 ∩ 𝒜𝑦 and so 𝑓 is a lattice morphism.
Thus 𝐿 ≃ 𝐼𝑚𝑓 where 𝐼𝑚𝑓 is a sublattice of the distributive lattice ℙ(𝒜). Consequently, 𝐿 is
distributive.
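The mapping 𝑓(𝑥) = 𝒜ₓ can be seen concretely in the lattice of divisors of 30, whose atoms are the primes 2, 3, 5 (our own illustration, not part of the text):

```python
from itertools import product
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

divisors = [d for d in range(1, 31) if 30 % d == 0]   # 8 divisors
atoms = [2, 3, 5]

def f(x):
    """Send x to the set of atoms (primes) below it."""
    return frozenset(p for p in atoms if x % p == 0)

injective = len({f(x) for x in divisors}) == len(divisors)
join_morphism = all(f(lcm(x, y)) == f(x) | f(y)
                    for x, y in product(divisors, repeat=2))
meet_morphism = all(f(gcd(x, y)) == f(x) & f(y)
                    for x, y in product(divisors, repeat=2))
print(injective, join_morphism, meet_morphism)   # True True True
```

Here 𝑓 is an injective lattice morphism onto ℙ({2, 3, 5}), mirroring the isomorphism 𝐿 ≃ ℙ(𝒜) of the corollary below.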

Corollary 3.5.3: If, in addition, 𝐿 is complete, then 𝐿 ≃ ℙ(𝒜).

Proof: Suppose that 𝐿 is complete and let 𝑁 = {𝑝𝑖 : 𝑖 ∈ 𝐼}, where each 𝑝𝑖 ∈ 𝒜, and let 𝑞 be an atom with 𝑞 ≤ ∨𝑖∈𝐼 𝑝𝑖. Then necessarily 𝑞 = 𝑝𝑖 for some 𝑖 ∈ 𝐼: for suppose that 𝑞 ≠ 𝑝𝑖 for all 𝑖 ∈ 𝐼; then (4) gives 𝑝𝑖 ≤ 𝑞′ for every 𝑖, whence we have the contradiction 𝑞 ≤ ∨𝑖∈𝐼 𝑝𝑖 ≤ 𝑞′. We conclude that 𝑁 = 𝒜𝑥, where 𝑥 = ∨𝑖∈𝐼 𝑝𝑖. Hence 𝑓 is also surjective and 𝐿 ≃ ℙ(𝒜).

Definition 3.5.4: By the width of the lattice 𝐿 we mean the supremum of the cardinalities of
the antichains in 𝐿.

Corollary 3.5.5: If a uniquely complemented lattice satisfies descending chain condition or


ascending chain condition, then it is distributive.

Proof: Under the descending chain condition every non-zero element of the lattice contains an atom, so 𝐿 is atomic; thus by the Birkhoff–Ward theorem 𝐿 is distributive. Under the ascending chain condition 𝐿 is coatomic, and the dual argument applies, since both unique complementation and distributivity are self-dual notions.

Corollary 3.5.6: Every uniquely complemented lattice of finite width is distributive.

Proof: Let 𝐿 be a uniquely complemented lattice of finite width. We show 𝐿 satisfies


the descending chain condition. Suppose this is not so and 𝑥1 > 𝑥2 > 𝑥3 > ⋯ is an infinite descending chain. Observe first that for each 𝑖, 𝑥𝑖 ∧ 𝑥′𝑖+1 ≠ 0. For if 𝑥𝑖 ∧ 𝑥′𝑖+1 = 0 then, since also 𝑥𝑖 ∨ 𝑥′𝑖+1 ≥ 𝑥𝑖+1 ∨ 𝑥′𝑖+1 = 1, 𝑥′𝑖+1 would have 𝑥𝑖 as a complement, and the uniqueness of complements would give 𝑥𝑖 = 𝑥𝑖+1, which is not possible. Now for each 𝑖, define
𝑦𝑖 = 𝑥𝑖 ∧ 𝑥′𝑖+1.
Then for 𝑖 < 𝑗, we have 𝑦𝑗 ≤ 𝑥𝑗 ≤ 𝑥𝑖+1, hence 𝑦𝑖 ∧ 𝑦𝑗 ≤ 𝑥′𝑖+1 ∧ 𝑥𝑖+1 = 0; i.e. the non-zero elements 𝑦1, 𝑦2, … are pairwise incomparable and form an infinite antichain, which contradicts our hypothesis of finite width. Hence 𝐿 satisfies the descending chain condition, and by Corollary 3.5.5 it is distributive.

Corollary 3.5.7: A finite uniquely complemented lattice is distributive.

Proof: Since 𝐿 is finite, every antichain in 𝐿 is finite. Thus 𝐿 is a uniquely


complemented lattice of finite width, so by the above corollary 𝐿 is distributive.

Theorem 3.5.8: In a uniquely complemented lattice 𝐿 the following properties of


complementation are equivalent:
(1) (for all 𝑥, 𝑦 ∈ 𝐿) 𝑥 ≤ 𝑦 implies 𝑦′ ≤ 𝑥′;
(2) (for all 𝑥, 𝑦 ∈ 𝐿) (𝑥 ∧ 𝑦)′ = 𝑥′ ∨ 𝑦′;
(3) (for all 𝑥, 𝑦 ∈ 𝐿) (𝑥 ∨ 𝑦)′ = 𝑥′ ∧ 𝑦′.
Moreover, each implies that 𝐿 is distributive.

Proof: (1) ⇒(2): Suppose that (1) holds, that is 𝑥 ↦ 𝑥′ is antitone. Then from 𝑥 ∧ 𝑦 ≤ 𝑥, 𝑦 we
obtain (𝑥 ∧ 𝑦)′ ≥ 𝑥′ ∨ 𝑦′ and consequently 𝑥 ∧ 𝑦 = (𝑥 ∧ 𝑦)′′ ≤ (𝑥′ ∨ 𝑦′)′. Likewise 𝑥′, 𝑦′ ≤ 𝑥′ ∨ 𝑦′ gives (𝑥′ ∨ 𝑦′)′ ≤ 𝑥′′ ∧ 𝑦′′ = 𝑥 ∧ 𝑦. Hence 𝑥 ∧ 𝑦 = (𝑥′ ∨ 𝑦′)′ and consequently
(𝑥 ∧ 𝑦)′ = (𝑥′ ∨ 𝑦′)′′ = 𝑥′ ∨ 𝑦′.
(2) ⇒ (1) : Suppose that for all 𝑥, 𝑦 ∈ 𝐿 (𝑥 ∧ 𝑦)′ = 𝑥′ ∨ 𝑦′; let 𝑥 ≤ 𝑦, this gives 𝑥 ∧ 𝑦 = 𝑥
so that 𝑥′ = 𝑥′ ∨ 𝑦′ ≥ 𝑦′.
A dual proof establishes the equivalence of (1) and (3).
As for the distributivity, suppose that any one of the above conditions holds. Then we have the property that 𝑥 ≤ 𝑦 implies
𝑦 = 𝑥 ∨ (𝑥′ ∧ 𝑦) (4)
and
𝑥 = (𝑥 ∨ 𝑦′) ∧ 𝑦. (5)
In fact, if 𝑥 ≤ 𝑦 then, writing 𝑤 = 𝑥 ∨ (𝑥′ ∧ 𝑦), we have 𝑤 ≤ 𝑦 and so
𝑤′ ∨ 𝑦 ≥ 𝑤′ ∨ 𝑤 = 1,
which gives 𝑤′ ∨ 𝑦 = 1. Moreover, by (3),
𝑤′ ∧ 𝑦 = 𝑥′ ∧ (𝑥′ ∧ 𝑦)′ ∧ 𝑦 = (𝑥′ ∧ 𝑦) ∧ (𝑥′ ∧ 𝑦)′ = 0.
Thus 𝑦 is the complement of 𝑤′, i.e. 𝑦 = 𝑤′′ = 𝑥 ∨ (𝑥′ ∧ 𝑦), and so (4) holds. As for (5), 𝑥 ≤ 𝑦 implies 𝑦′ ≤ 𝑥′ by (1), so (4) applied to 𝑦′ ≤ 𝑥′ gives 𝑥′ = 𝑦′ ∨ (𝑦′′ ∧ 𝑥′), which implies 𝑥 = 𝑥′′ = 𝑦 ∧ (𝑦′ ∨ 𝑥) = (𝑥 ∨ 𝑦′) ∧ 𝑦, so (5) holds.
We now use (4) and (5) to show that 𝐿 is distributive. For this purpose suppose that 𝑎, 𝑏, 𝑐 ∈ 𝐿 are such that 𝑎 ∨ 𝑐 = 𝑏 ∨ 𝑐 = 𝛼 and 𝑎 ∧ 𝑐 = 𝑏 ∧ 𝑐 = 𝛽. Then on the one hand, since 𝛽 ≤ 𝑎 and, by (4) applied to 𝛽 ≤ 𝑐, 𝑐 = 𝛽 ∨ (𝛽′ ∧ 𝑐), we have
𝑎 ∨ 𝛼′ ∨ (𝑐 ∧ 𝛽′) = 𝑎 ∨ 𝛽 ∨ (𝛽′ ∧ 𝑐) ∨ 𝛼′ = 𝑎 ∨ 𝑐 ∨ 𝛼′ = 𝛼 ∨ 𝛼′ = 1,
and similarly 𝑏 ∨ 𝛼′ ∨ (𝑐 ∧ 𝛽′) = 1. On the other hand,
𝑎 ∧ [𝛼′ ∨ (𝑐 ∧ 𝛽′)] = 𝑎 ∧ 𝑐 ∧ 𝛽′ = 𝛽 ∧ 𝛽′ = 0,
and similarly 𝑏 ∧ [𝛼′ ∨ (𝑐 ∧ 𝛽′)] = 0. Thus by the uniqueness of complements 𝑎 = [𝛼′ ∨ (𝑐 ∧ 𝛽′)]′ = 𝑏. Therefore, by Theorem 3.3.15, 𝐿 is distributive.
The properties (2) and (3) in the above theorem are often referred to as the de Morgan laws.

Theorem 3.5.9: (Von Neumann) Every uniquely complemented modular lattice is distributive.

For the proof of the theorem we need the following lemma.


Lemma: If a uniquely complemented lattice 𝐿 is modular, then in it
(1) 𝑥 ∧ 𝑦 = 0 implies 𝑥 ≤ 𝑦 ′
(2) 𝑥 < 𝑦 implies 𝑥 ∨ (𝑥 ′ ∧ 𝑦) = 𝑦.

Proof of the lemma: (1) We show that if 𝑥 ∧ 𝑦 = 0 then 𝑥 ∨ (𝑥 ∨ 𝑦)′ is the complement of 𝑦. Since 𝑥 ≤ 𝑥 ∨ 𝑦, modularity gives
(𝑥 ∨ 𝑦) ∧ [𝑥 ∨ (𝑥 ∨ 𝑦)′] = 𝑥 ∨ [(𝑥 ∨ 𝑦) ∧ (𝑥 ∨ 𝑦)′] = 𝑥 ∨ 0 = 𝑥, whence
[𝑥 ∨ (𝑥 ∨ 𝑦)′] ∧ 𝑦 = [𝑥 ∨ (𝑥 ∨ 𝑦)′] ∧ (𝑥 ∨ 𝑦) ∧ 𝑦 = 𝑥 ∧ 𝑦 = 0.
But [𝑥 ∨ (𝑥 ∨ 𝑦)′] ∨ 𝑦 = (𝑥 ∨ 𝑦) ∨ (𝑥 ∨ 𝑦)′ = 1, therefore by the uniqueness of the complement
𝑥 ∨ (𝑥 ∨ 𝑦)′ = 𝑦′ and hence 𝑥 ≤ 𝑦′.
(2) If 𝑥 < 𝑦 then 𝑥′ ∧ 𝑦 ≠ 0, for if 𝑥′ ∧ 𝑦 = 0 then by (1) 𝑦 ≤ 𝑥′′ = 𝑥, which contradicts 𝑥 < 𝑦. Suppose now that 𝑥 ∨ (𝑥′ ∧ 𝑦) = 𝑧 < 𝑦. Then, by the observation just made, 𝑧′ ∧ 𝑦 ≠ 0. But 𝑧′ ∧ 𝑥 ≤ 𝑧′ ∧ 𝑧 = 0, so it follows from (1) that
𝑧′ ≤ 𝑥′, whence 𝑧′ ∧ 𝑦 ≤ 𝑥′ ∧ 𝑦 ≤ 𝑧 and therefore 𝑧′ ∧ 𝑦 ≤ 𝑧 ∧ 𝑧′ = 0, a contradiction. Hence 𝑥 ∨ (𝑥′ ∧ 𝑦) = 𝑦.
Proof of the Theorem: Suppose a uniquely complemented lattice 𝐿 is modular. We show that it contains no sublattice isomorphic to 𝑀3 (the diamond). The proof is by contradiction. Suppose there exists in 𝐿 a sublattice of the form 𝑀3, that is, elements 𝑥, 𝑦, 𝑧 whose pairwise meets all equal 𝑢 and whose pairwise joins all equal 𝑣.
(a) Suppose first that 𝑢 = 0. Then 𝑥 ∧ 𝑦 = 0 and 𝑥 ∧ 𝑧 = 0, so by part (1) of the above lemma 𝑦 ≤ 𝑥′ and 𝑧 ≤ 𝑥′, whence 𝑥 ≤ 𝑣 = 𝑦 ∨ 𝑧 ≤ 𝑥′ and so 𝑥 = 𝑥 ∧ 𝑥′ = 0, which is not possible.
(b) Now suppose that 𝑢 ≠ 0. Let 𝑥⋆ = 𝑢′ ∧ 𝑥, 𝑦⋆ = 𝑢′ ∧ 𝑦, 𝑧⋆ = 𝑢′ ∧ 𝑧. Then
𝑥⋆ ∧ 𝑦⋆ = (𝑢′ ∧ 𝑥) ∧ (𝑢′ ∧ 𝑦) = 𝑢′ ∧ (𝑥 ∧ 𝑦) = 𝑢′ ∧ 𝑢 = 0,
and similarly 𝑦⋆ ∧ 𝑧⋆ = 𝑧⋆ ∧ 𝑥⋆ = 0.
Consider 𝑢 ∨ 𝑣′. On the one hand, since 𝑢 ≤ 𝑥 and 𝑢 ≤ 𝑦 give 𝑢 ∨ (𝑢′ ∧ 𝑥) = 𝑥 and 𝑢 ∨ (𝑢′ ∧ 𝑦) = 𝑦 by modularity, we have
𝑢 ∨ 𝑣′ ∨ (𝑥⋆ ∨ 𝑦⋆) = 𝑢 ∨ 𝑣′ ∨ (𝑢′ ∧ 𝑥) ∨ (𝑢′ ∧ 𝑦)
= 𝑣′ ∨ 𝑥 ∨ 𝑦 (by modularity)
= 𝑣′ ∨ 𝑣 = 1.
On the other hand, 𝑥⋆ ∨ 𝑦⋆ ≤ 𝑢′ ∧ 𝑣, and by modularity (𝑢 ≤ 𝑣) we have (𝑢 ∨ 𝑣′) ∧ 𝑣 = 𝑢 ∨ (𝑣′ ∧ 𝑣) = 𝑢, so that
(𝑢 ∨ 𝑣′) ∧ (𝑥⋆ ∨ 𝑦⋆) ≤ (𝑢 ∨ 𝑣′) ∧ 𝑣 ∧ 𝑢′ = 𝑢 ∧ 𝑢′ = 0.
Hence 𝑥⋆ ∨ 𝑦⋆ is the complement of 𝑢 ∨ 𝑣′. Similarly so also are 𝑦⋆ ∨ 𝑧⋆ and 𝑧⋆ ∨ 𝑥⋆. Hence by the uniqueness of complements 𝑥⋆ ∨ 𝑦⋆ = 𝑦⋆ ∨ 𝑧⋆ = 𝑧⋆ ∨ 𝑥⋆. This takes us back to the situation of (a), and the resulting contradiction completes the proof.
Chapter 4
Boolean Rings and Boolean Algebras
This chapter aims to provide knowledge of Boolean rings and Boolean algebras. It consists of four sections. Section 4.1 deals with Boolean algebras and isomorphisms of Boolean algebras. In Section 4.2 we discuss Boolean rings with the help of some examples, and in Section 4.3 we compare Boolean rings and Boolean algebras and discuss the technique by which a Boolean ring can be converted into a Boolean algebra and conversely. In Section 4.4 we discuss some important applications of Boolean algebras.

4.1 Boolean Algebras


In this section we shall discuss Boolean algebras in detail with some examples, and briefly treat isomorphisms between Boolean algebras. The section ends with a few important results on Boolean algebras.
Let us investigate the example of the power set ℙ(𝑋) of a set 𝑋 more closely. The power set is a lattice that is ordered by inclusion. By the definition of the power set, the largest element in ℙ(𝑋) is 𝑋 itself and the smallest element is ∅, the empty set. For any set 𝐴 in ℙ(𝑋), we know that 𝐴 ∩ 𝑋 = 𝐴 and 𝐴 ∪ ∅ = 𝐴. This suggests the following definition for lattices. An element 1 in a poset 𝑋 is a top element if 𝑎 ≤ 1 for all 𝑎 ∈ 𝑋. An element 0 is a bottom element of 𝑋 if 0 ≤ 𝑎 for all 𝑎 ∈ 𝑋.
Let 𝐴 be in ℙ(𝑋). Recall that the complement of 𝐴 is
𝐴´ = 𝑋 ∖ 𝐴 = {𝑥 ∶ 𝑥 ∈ 𝑋 and 𝑥 ∉ 𝐴}.
We know that 𝐴 ∪ 𝐴´ = 𝑋 and 𝐴 ∩ 𝐴´ = ∅. We can generalize this example to lattices. In a lattice 𝐿 the binary operations ∨ and ∧ satisfy the commutative and associative laws; however, they need not satisfy the distributive law
𝑎 ∧ (𝑏 ∨ 𝑐) = (𝑎 ∧ 𝑏) ∨ (𝑎 ∧ 𝑐).
In ℙ(𝑋), however, the distributive law is satisfied, since
𝐴 ∩ (𝐵 ∪ 𝐶) = (𝐴 ∩ 𝐵) ∪ (𝐴 ∩ 𝐶)
for 𝐴, 𝐵, 𝐶 ∈ ℙ(𝑋), and we know that a lattice 𝐿 is distributive if and only if the distributive law
𝑎 ∧ (𝑏 ∨ 𝑐) = (𝑎 ∧ 𝑏) ∨ (𝑎 ∧ 𝑐)
holds for all 𝑎, 𝑏, 𝑐 ∈ 𝐿.
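The laws above for ℙ(𝑋) can be verified mechanically for a small universe. The following Python sketch (an illustrative aid, not part of the formal development) represents subsets as frozensets and brute-force checks the complement and distributive laws:

```python
from itertools import combinations

# Universe X and its power set, with subsets represented as frozensets.
X = frozenset({1, 2, 3})
P = [frozenset(c) for r in range(len(X) + 1)
     for c in combinations(sorted(X), r)]

# In P(X): join = union, meet = intersection, complement = X \ A.
for A in P:
    assert A | (X - A) == X            # A ∪ A' = X
    assert A & (X - A) == frozenset()  # A ∩ A' = ∅
for A in P:
    for B in P:
        for C in P:
            # distributive law: A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C)
            assert A & (B | C) == (A & B) | (A & C)
print("all", len(P), "subsets checked")
```

Since every law is an identity quantified over all elements, exhaustive checking over the eight subsets suffices for this particular lattice.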

Definition 4.1.1: A Boolean algebra is a lattice 𝐵 with a greatest element 1 and a smallest element 0 such that 𝐵 is both distributive and complemented.

Example 4.1.2: The power set ℙ(𝑋) of 𝑋 is our prototype for a Boolean algebra. As it turns out, it is also one of the most important Boolean algebras. The following theorem allows us to characterize Boolean algebras in terms of the binary operations ∨ and ∧ without mention of the fact that a Boolean algebra is a poset.
The next result proves that a Boolean algebra is an algebraic structure with respect to the operations of join and meet.

Theorem 4.1.3: A set 𝐵 is a Boolean algebra if and only if there exist binary operations ∨ and
∧ on 𝐵 satisfying the following axioms.
(1) 𝑎 ∨ 𝑏 = 𝑏 ∨ 𝑎 and 𝑎 ∧ 𝑏 = 𝑏 ∧ 𝑎 for all 𝑎, 𝑏 ∈ 𝐵,
(2) 𝑎 ∨ (𝑏 ∨ 𝑐) = (𝑎 ∨ 𝑏) ∨ 𝑐and 𝑎 ∧ (𝑏 ∧ 𝑐) = (𝑎 ∧ 𝑏) ∧ 𝑐for𝑎, 𝑏, 𝑐 ∈ 𝐵,
(3) 𝑎 ∧ (𝑏 ∨ 𝑐) = (𝑎 ∧ 𝑏) ∨ (𝑎 ∧ 𝑐) and 𝑎 ∨ (𝑏 ∧ 𝑐) = (𝑎 ∨ 𝑏) ∧ (𝑎 ∨ 𝑐) for all 𝑎, 𝑏, 𝑐 ∈ 𝐵,
(4) There exist elements 1 and 0 such that 𝑎 ∨ 0 = 𝑎 and 𝑎 ∧ 1 = 𝑎 for all 𝑎 ∈ 𝐵,
(5) for every 𝑎 ∈ 𝐵 there exists an 𝑎´ ∈ 𝐵 such that 𝑎 ∨ 𝑎´ = 1 and 𝑎 ∧ 𝑎´ = 0.

Proof: Let 𝐵 be a set satisfying (1)-(5) in the theorem. We show that 𝐵 is a Boolean algebra.
One of the idempotent laws is satisfied since
𝑎 = 𝑎 ∨ 0 (using (4))
= 𝑎 ∨ (𝑎 ∧ 𝑎´) (using (5))
= (𝑎 ∨ 𝑎) ∧ (𝑎 ∨ 𝑎´) (using (3))
= (𝑎 ∨ 𝑎) ∧ 1 (using (5))
= 𝑎 ∨ 𝑎. (using (4))
Observe that, using (5), (1), (2) and the idempotent law just proved,
1 ∨ 𝑏 = (𝑏 ∨ 𝑏´) ∨ 𝑏 = (𝑏 ∨ 𝑏) ∨ 𝑏´ = 𝑏 ∨ 𝑏´ = 1.
Consequently, the first of the two absorption laws holds, since
𝑎 ∨ (𝑎 ∧ 𝑏) = (𝑎 ∧ 1) ∨ (𝑎 ∧ 𝑏)
= 𝑎 ∧ (1 ∨ 𝑏)
=𝑎∧1 (using (4))
= 𝑎.
The other idempotent and absorption laws are proven similarly. Since 𝐵 also satisfies (1) and (2), the conditions of Theorems 2.1.6 and 2.1.7 are met, therefore 𝐵 must be a lattice. Condition (3) tells us that 𝐵 is a distributive lattice.
For 𝑎 ∈ 𝐵, 0 ∨ 𝑎 = 𝑎, hence 0 ≤ 𝑎 and 0 is the bottom element of 𝐵. To show that 1 is the top element of 𝐵, observe that by the absorption laws
𝑎 ∨ 1 = (𝑎 ∧ 1) ∨ 1 = 1 ∨ (1 ∧ 𝑎) = 1,
so 𝑎 ≤ 1 for all 𝑎 in 𝐵. Finally, since 𝐵 is complemented by (5), 𝐵 must be a Boolean algebra.
Conversely, suppose that 𝐵 is a Boolean algebra. Let 1 and 0 be the greatest and least elements of 𝐵 respectively. If we define 𝑎 ∨ 𝑏 and 𝑎 ∧ 𝑏 as the least upper and greatest lower bounds of {𝑎, 𝑏}, then (1)-(5) follow from Theorems 2.1.6 and 2.1.7, the definition of a distributive lattice, and our hypothesis.

Many other identities hold in Boolean algebras. Some of these are listed in the following
theorem.

Theorem 4.1.4: Let 𝐵 be a Boolean algebra then


(a) 𝑎 ∨ 1 = 1 and 𝑎 ∧ 0 = 0 for all 𝑎 ∈ 𝐵
(b) If 𝑎 ∨ 𝑏 = 𝑎 ∨ 𝑐 and 𝑎 ∧ 𝑏 = 𝑎 ∧ 𝑐 for 𝑎, 𝑏, 𝑐 ∈ 𝐵 then 𝑏 = 𝑐
(c) (𝑎´)´ = 𝑎 for all 𝑎 ∈ 𝐵
(d) 1´ = 0 and 0´ = 1.

Proof: (a) We know that 𝑎 ∨ 𝑏 = sup {𝑎, 𝑏} then:


𝑎 ∨ 1 = sup {𝑎, 1}
= 1. (1 being the top element in 𝐵)
Also we know that 𝑎 ∧ 𝑏 = inf {𝑎, 𝑏} then:
𝑎 ∧ 0 = 𝑖𝑛𝑓 {𝑎, 0}
= 0. (0 being the least element in 𝐵)
(b) For 𝑎 ∨ 𝑏 = 𝑎 ∨ 𝑐 and 𝑎 ∧ 𝑏 = 𝑎 ∧ 𝑐, we have
𝑏 = 𝑏 ∨ (𝑏 ∧ 𝑎)
= 𝑏 ∨ (𝑎 ∧ 𝑏) (by (1) of Theorem 4.1.3)
= 𝑏 ∨ (𝑎 ∧ 𝑐) (by given hypothesis)
= (𝑏 ∨ 𝑎) ∧ (𝑏 ∨ 𝑐) (by (3) of Theorem 4.1.3)
= (𝑎 ∨ 𝑏) ∧ (𝑏 ∨ 𝑐) (since join operation is
commutative)
= (𝑎 ∨ 𝑐) ∧ (𝑏 ∨ 𝑐) (by given hypothesis)
= (𝑐 ∨ 𝑎) ∧ (𝑐 ∨ 𝑏) (by (1) of Theorem 4.1.3)
= 𝑐 ∨ (𝑎 ∧ 𝑏) (by (3) of Theorem 4.1.3)
= 𝑐 ∨ (𝑎 ∧ 𝑐) (by given hypothesis)
= 𝑐 ∨ (𝑐 ∧ 𝑎) (by (1) of Theorem 4.1.3)
= 𝑐.
(c) Since 𝑎 ∨ 𝑎´ = 1 and 𝑎 ∧ 𝑎´ = 0, the element 𝑎 is a complement of 𝑎´. But complements are unique: if 𝑥 and 𝑦 are both complements of 𝑎´, then 𝑎´ ∨ 𝑥 = 1 = 𝑎´ ∨ 𝑦 and 𝑎´ ∧ 𝑥 = 0 = 𝑎´ ∧ 𝑦, so 𝑥 = 𝑦 by (b). Hence (𝑎´)´ = 𝑎.
(d) Since 1 ∨ 0 = 1 and 1 ∧ 0 = 0, the element 0 is a complement of 1; by the uniqueness of complements established in (c), 1´ = 0. Dually, 0 ∨ 1 = 1 and 0 ∧ 1 = 0 show that 0´ = 1.

Theorem 4.1.5: If 𝐵 is a Boolean algebra then


(1) [de Morgan laws] for all 𝑥, 𝑦 ∈ 𝐵, (𝑥 ∧ 𝑦)´ = 𝑥´ ∨ 𝑦´ and (𝑥 ∨ 𝑦)´ = 𝑥´ ∧ 𝑦´;
(2) for all 𝑥, 𝑦 ∈ 𝐵, 𝑥 ≤ 𝑦 if and only if 𝑥´ ≥ 𝑦´;
(3) for all 𝑥, 𝑦, 𝑧 ∈ 𝐵, 𝑥 ∧ 𝑦 ≤ 𝑧 if and only if 𝑥 ≤ 𝑧 ∨ 𝑦´;
(4) for all 𝑥, 𝑦, 𝑧 ∈ 𝐵, 𝑥 ∨ 𝑦 ≥ 𝑧 if and only if 𝑥 ≥ 𝑧 ∧ 𝑦´.

Proof: (1) Observe that, by distributivity of 𝐵


(𝑥 ∧ 𝑦) ∨ 𝑥´ ∨ 𝑦´ = (𝑥 ∨ 𝑥´ ∨ 𝑦´) ∧ (𝑦 ∨ 𝑥´ ∨ 𝑦´) = (1 ∨ 𝑦´) ∧ (1 ∨ 𝑥´) = 1 ∧ 1 = 1;
𝑥 ∧ 𝑦 ∧ (𝑥´ ∨ 𝑦´) = (𝑥 ∧ 𝑦 ∧ 𝑥´) ∨ (𝑥 ∧ 𝑦 ∧ 𝑦´) = (0 ∧ 𝑦) ∨ (0 ∧ 𝑥) = 0 ∨ 0 = 0
this shows that the (unique) complement of 𝑥 ∧ 𝑦 is 𝑥´ ∨ 𝑦´. Similarly, we can establish the
other law.
(2) By (1) we have 𝑥 ≤ 𝑦 if and only if 𝑥 ∧ 𝑦 = 𝑥 if and only if 𝑥´ ∨ 𝑦´ = 𝑥´ if and only if 𝑦´ ≤
𝑥´.
(3) If 𝑥 ∧ 𝑦 ≤ 𝑧 then 𝑧 ∨ 𝑦´ ≥ (𝑥 ∧ 𝑦) ∨ 𝑦´ = (𝑥 ∨ 𝑦´) ∧ (𝑦 ∨ 𝑦´) = (𝑥 ∨ 𝑦´) ∧ 1 = 𝑥 ∨ 𝑦´ ≥ 𝑥 (by distributivity and (5) of Theorem 4.1.3). Conversely, if 𝑥 ≤ 𝑧 ∨ 𝑦´ then
𝑥 ∧ 𝑦 ≤ (𝑧 ∨ 𝑦´) ∧ 𝑦 = (𝑧 ∧ 𝑦) ∨ (𝑦´ ∧ 𝑦) = (𝑧 ∧ 𝑦) ∨ 0 = 𝑧 ∧ 𝑦 ≤ 𝑧 (by distributivity and (5) of Theorem 4.1.3).
(4) This is clearly the dual of (3) and it follows by the duality principle.
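The de Morgan laws and the antitone property (2) can likewise be confirmed by brute force in the power-set algebra. The following Python sketch is only a sanity check on a small example, not a proof; the helper name `comp` is our own:

```python
from itertools import combinations

X = frozenset({'a', 'b', 'c'})
subsets = [frozenset(c) for r in range(len(X) + 1)
           for c in combinations(sorted(X), r)]

def comp(A):
    """Complement relative to the universe X."""
    return X - A

for A in subsets:
    for B in subsets:
        # de Morgan laws: (A ∩ B)' = A' ∪ B' and (A ∪ B)' = A' ∩ B'
        assert comp(A & B) == comp(A) | comp(B)
        assert comp(A | B) == comp(A) & comp(B)
        # property (2): A ⊆ B if and only if B' ⊆ A'
        assert (A <= B) == (comp(B) <= comp(A))
print("de Morgan laws hold on P(X)")
```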

Definition 4.1.6: A Boolean algebra 𝐵 is degenerate if it has just one element, that is, if 0 = 1. In this case the operations of join, meet, and complementation are all constant. The simplest non-degenerate Boolean algebra is discussed in the example below.

Example 4.1.7: The class of all subsets of a one-element set, which has just two elements, 0 (the empty set) and 1 (the one-element set), forms a non-degenerate Boolean algebra under the operations of join and meet described by the following arithmetic tables:

∨ | 0 1        ∧ | 0 1
0 | 0 1        0 | 0 0
1 | 1 1        1 | 0 1

and complementation is the unary operation that maps 0 to 1, and conversely.
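These tables can be encoded directly: on {0, 1}, join is the maximum and meet is the minimum. A quick check of the axioms of Theorem 4.1.3 (an illustrative sketch with hypothetical helper names):

```python
# Join and meet on {0, 1}, matching the arithmetic tables above.
def join(a, b):
    return max(a, b)   # 0 ∨ 0 = 0, otherwise 1

def meet(a, b):
    return min(a, b)   # 1 ∧ 1 = 1, otherwise 0

def comp(a):
    return 1 - a       # complementation swaps 0 and 1

B = (0, 1)
for a in B:
    # axiom (5): a ∨ a' = 1 and a ∧ a' = 0
    assert join(a, comp(a)) == 1 and meet(a, comp(a)) == 0
    # axiom (4): a ∨ 0 = a and a ∧ 1 = a
    assert join(a, 0) == a and meet(a, 1) == a
    for b in B:
        for c in B:
            # axiom (3): distributivity
            assert meet(a, join(b, c)) == join(meet(a, b), meet(a, c))
print("two-element Boolean algebra verified")
```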

Notation: By 𝑅 𝑋 we shall denote the set of all functions from 𝑋 into 𝑅 (as discussed in example
4.2.) throughout.

Example 4.1.8: The set 𝑅 𝑋 forms a Boolean algebra, as it clearly satisfies the axioms of Theorem 4.1.3 (with the operations defined pointwise).

Isomorphism of Boolean Algebras


Definition 4.1.9: A function 𝜑 is called an isomorphism from a Boolean algebra 𝐵 =
(𝐵,∧𝐵 ,∨𝐵 , 0𝐵 , 1𝐵 ) into a Boolean algebra 𝐶 = (𝐶,∧𝐶 ,∨𝐶 , 0𝐶 , 1𝐶 ) if and only if
(a) 𝜑 is a one-one function from 𝐵 into 𝐶,
(b) for any 𝑥, 𝑦 in 𝐵,
𝜑(𝑥 ∧𝐵 𝑦) = 𝜑(𝑥) ∧𝐶 𝜑(𝑦)
𝜑(𝑥 ∨𝐵 𝑦) = 𝜑(𝑥) ∨𝐶 𝜑(𝑦)
𝜑(𝑥’𝐵 ) = (𝜑(𝑥))’𝐶 .
Such a function 𝜑 is called an isomorphism from 𝐵 onto 𝐶 if in addition 𝜑 is a function from
𝐵 onto 𝐶.

Theorem 4.1.10: Let 𝜑 be an isomorphism from a Boolean algebra 𝐵 into (respectively, onto)
a Boolean algebra 𝐶(with the notation given above) then:
(a) 𝜑(0𝐵) = 0𝐶 and 𝜑(1𝐵) = 1𝐶.
(b) In Definition 4.1.9 it is not necessary to assume that
𝜑(𝑥 ∨𝐵 𝑦) = 𝜑(𝑥) ∨𝐶 𝜑(𝑦) for all 𝑥, 𝑦 in 𝐵;
alternatively, we could omit the assumption that
𝜑(𝑥 ∧𝐵 𝑦) = 𝜑(𝑥) ∧𝐶 𝜑(𝑦).
(c) If 𝜓 is an isomorphism from 𝐶 into (respectively, onto) a Boolean algebra 𝐷 =
(𝐷,∧𝐷 ,∨𝐷 , 0, 1𝐷 ) then the composite mapping 𝜓 ∘ 𝜑 is an isomorphism from 𝐵 into
(respectively, onto) 𝐷.
(d) The inverse mapping 𝜑−1 is an isomorphism from the subalgebra of 𝐶 determined by 𝜑[𝐵] onto 𝐵, and in particular if 𝜑 is onto 𝐶, then 𝜑−1 is an isomorphism from 𝐶 onto 𝐵.

Proof : (a) 𝜑(0𝐵 ) = 𝜑 (𝑥 ∧𝐵 𝑥𝐵′ ) = 𝜑(𝑥) ∧𝐶 𝜑(𝑥′𝐵 ) = 𝜑(𝑥) ∧𝐶 (𝜑(𝑥))’𝐶 = 0𝐶 and


𝜑(1𝐵 ) = 𝜑(0’𝐵 ) = (𝜑(0𝐵 ))’𝐶 = 0’𝐶 = 1𝐶 .
(b) 𝜑(𝑥 ∨𝐵 𝑦) = 𝜑((𝑥′𝐵 ∧𝐵 𝑦′𝐵)′𝐵) = (𝜑(𝑥′𝐵 ∧𝐵 𝑦′𝐵))′𝐶
= (𝜑(𝑥′𝐵) ∧𝐶 𝜑(𝑦′𝐵))′𝐶
= ((𝜑(𝑥))′𝐶 ∧𝐶 (𝜑(𝑦))′𝐶)′𝐶
= 𝜑(𝑥) ∨𝐶 𝜑(𝑦).

(c) First, 𝜓 ∘ 𝜑 is one-one (if 𝑥 ≠ 𝑦, then 𝜑(𝑥) ≠ 𝜑(𝑦) and therefore 𝜓(𝜑(𝑥)) ≠ 𝜓(𝜑(𝑦))).
Second, (𝜓 ∘ 𝜑)(𝑥′𝐵) = 𝜓(𝜑(𝑥′𝐵)) = 𝜓((𝜑(𝑥))′𝐶) = (𝜓(𝜑(𝑥)))′𝐷.


Lastly, (𝜓 ∘ 𝜑)(𝑥 ∧𝐵 𝑦) = 𝜓(𝜑(𝑥 ∧𝐵 𝑦))
= 𝜓(𝜑(𝑥) ∧𝐶 𝜑(𝑦))
= 𝜓(𝜑(𝑥)) ∧𝐷 𝜓(𝜑(𝑦))
= (𝜓 ∘ 𝜑)(𝑥) ∧𝐷 (𝜓 ∘ 𝜑)(𝑦).

(d) Assume 𝑧, 𝑤 ∈ 𝜑[𝐵]. Then 𝑧 = 𝜑(𝑥) and 𝑤 = 𝜑(𝑦) for some 𝑥 and 𝑦 in 𝐵; hence 𝑥 = 𝜑−1(𝑧) and 𝑦 = 𝜑−1(𝑤). First, if 𝑧 ≠ 𝑤, then 𝑥 ≠ 𝑦 (for if 𝑥 = 𝑦, then 𝑧 = 𝜑(𝑥) = 𝜑(𝑦) = 𝑤); thus 𝜑−1 is one-one. Second, 𝜑(𝑥 ∨𝐵 𝑦) = 𝜑(𝑥) ∨𝐶 𝜑(𝑦) = 𝑧 ∨𝐶 𝑤, hence
𝜑−1(𝑧 ∨𝐶 𝑤) = 𝑥 ∨𝐵 𝑦 = 𝜑−1(𝑧) ∨𝐵 𝜑−1(𝑤). Thirdly, 𝜑(𝑥′𝐵) = (𝜑(𝑥))′𝐶 = 𝑧′𝐶, whence 𝜑−1(𝑧′𝐶) = 𝑥′𝐵 = (𝜑−1(𝑧))′𝐵.

We say that 𝐵 is isomorphic with 𝐶 if and only if there is an isomorphism from 𝐵 onto 𝐶. From Theorem 4.1.10 (d) and (c) it follows that if 𝐵 is isomorphic with 𝐶 then 𝐶 is isomorphic with 𝐵, and if in addition 𝐶 is isomorphic with 𝐷 then 𝐵 is isomorphic with 𝐷. Isomorphic Boolean algebras have, in a certain sense, the same Boolean structure. More precisely, this means that any property (formulated in the language of Boolean algebras) holding for one Boolean algebra also holds for any isomorphic Boolean algebra.

Example 4.1.11: Boolean algebras 𝑃(𝑋) and 𝑅 𝑋 are isomorphic via the mapping that takes
each subset of 𝑋 to its characteristic function.

Proof: Let 𝑓: 𝑃(𝑋) → 𝑅 𝑋 be the mapping that sends each 𝑃 ∈ 𝑃(𝑋) to its characteristic function 𝑝, defined by
𝑝(𝑥) = 1 if 𝑥 ∈ 𝑃, and 𝑝(𝑥) = 0 if 𝑥 ∉ 𝑃.
We now show that 𝑓 is an isomorphism. Clearly 𝑓 is one-one and onto: distinct subsets have distinct characteristic functions, and every 2-valued function on 𝑋 is the characteristic function of some subset.
Now, for each 𝑥 ∈ 𝑋, 𝑓(𝑃 ∪ 𝑄)(𝑥) = 1 if and only if 𝑥 ∈ 𝑃 ∪ 𝑄
if and only if 𝑥 ∈ 𝑃 or 𝑥 ∈ 𝑄
if and only if 𝑝(𝑥) = 1 or 𝑞(𝑥) = 1
if and only if 𝑝(𝑥) + 𝑞(𝑥) + 𝑝(𝑥)𝑞(𝑥) = 1
if and only if (𝑝 ∨ 𝑞)(𝑥) = 1.
Since both functions take only the values 0 and 1, this gives 𝑓(𝑃 ∪ 𝑄) = 𝑓(𝑃) ∨ 𝑓(𝑄); thus join is preserved. Similarly, we can show that meet is preserved. Also
𝑓(𝑃´)(𝑥) = 1 if 𝑥 ∉ 𝑃, and 0 if 𝑥 ∈ 𝑃,
so that 𝑓(𝑃´) = 𝑓(𝑃)´. Finally, 𝑓(∅) = 0 and 𝑓(𝑋) = 1. Thus 𝑓 is an isomorphism.
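The isomorphism of Example 4.1.11 is easy to exhibit concretely. In this sketch (an illustration under our own naming, with `char_fn` playing the role of 𝑓) a 2-valued function on 𝑋 is stored as a tuple of 0s and 1s indexed by the elements of 𝑋:

```python
from itertools import combinations

X = [1, 2, 3]

def char_fn(P):
    """Characteristic function of P ⊆ X, as a tuple over X."""
    return tuple(1 if x in P else 0 for x in X)

subsets = [frozenset(c) for r in range(len(X) + 1)
           for c in combinations(X, r)]

# f is one-one: distinct subsets give distinct characteristic functions.
images = {char_fn(P) for P in subsets}
assert len(images) == len(subsets)

# Join and meet are preserved: pointwise ∨ is max, pointwise ∧ is min.
for P in subsets:
    for Q in subsets:
        p, q = char_fn(P), char_fn(Q)
        assert char_fn(P | Q) == tuple(max(a, b) for a, b in zip(p, q))
        assert char_fn(P & Q) == tuple(min(a, b) for a, b in zip(p, q))
print("P(X) is isomorphic to R^X via characteristic functions")
```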

Finite Boolean Algebras


Definition 4.1.12: A Boolean algebra is a finite Boolean algebra if it contains only finitely many elements. Finite Boolean algebras are particularly nice, since we can classify them up to isomorphism.

Let 𝐵 and 𝐶 be Boolean algebras. We know that a bijective map 𝜙: 𝐵 → 𝐶 is an isomorphism of Boolean algebras if for all 𝑎 and 𝑏 in 𝐵,
𝜙(𝑎 ∨ 𝑏) = 𝜙(𝑎) ∨ 𝜙(𝑏),
𝜙(𝑎 ∧ 𝑏) = 𝜙(𝑎) ∧ 𝜙(𝑏).

We will show that any finite Boolean algebra is isomorphic to the Boolean algebra obtained by taking the power set of some finite set 𝑋. We will need a few lemmas and definitions before we prove this result. Let 𝐵 be a finite Boolean algebra. Recall that an element 𝑎 ∈ 𝐵 is an atom of 𝐵 if 𝑎 ≠ 0 and, for every 𝑏 ∈ 𝐵, either 𝑎 ∧ 𝑏 = 𝑎 or 𝑎 ∧ 𝑏 = 0. Equivalently, 𝑎 is an atom of 𝐵 if there is no non-zero 𝑏 ∈ 𝐵 distinct from 𝑎 such that 0 ≤ 𝑏 ≤ 𝑎.

Lemma 4.1.13: Let 𝐵 be a finite Boolean algebra. If 𝑏 is a non-zero element of 𝐵, then there
is an atom 𝑎 in 𝐵 such that 𝑎 ≤ 𝑏.

Proof: If 𝑏 is an atom, let 𝑎 = 𝑏. Otherwise, choose an element 𝑏1 , not equal to 0 or 𝑏 such


that 𝑏1 ≤ 𝑏. We are guaranteed that this is possible since b is not an atom. If 𝑏1 is an atom then
we are done. If not choose 𝑏2 not equal to 0 or 𝑏1 such that 𝑏2 ≤ 𝑏1 . Again if 𝑏2 is an atom, let
𝑎 = 𝑏2 . Continuing this process, we can obtain a chain
0 ≤ ⋯ ≤ 𝑏3 ≤ 𝑏2 ≤ 𝑏1 ≤ 𝑏.
Since 𝐵 is a finite Boolean algebra, this chain must be finite; that is, 𝑏𝑘 is an atom for some 𝑘, and we let 𝑎 = 𝑏𝑘.

Lemma 4.1.14: Let 𝑎 and 𝑏 be atoms in a finite Boolean algebra 𝐵 such that 𝑎 ≠ 𝑏. Then 𝑎 ∧
𝑏 = 0.

Proof: Since 𝑎 ∧ 𝑏 is the greatest lower bound of 𝑎 and 𝑏, we know that 𝑎 ∧ 𝑏 ≤ 𝑎; hence, as 𝑎 is an atom, either 𝑎 ∧ 𝑏 = 𝑎 or 𝑎 ∧ 𝑏 = 0. However, if 𝑎 ∧ 𝑏 = 𝑎 then 𝑎 ≤ 𝑏, and since 𝑏 is an atom and 𝑎 ≠ 0 this forces 𝑎 = 𝑏, a contradiction. Therefore 𝑎 ∧ 𝑏 = 0.

Lemma 4.1.15: Let 𝐵 be a Boolean algebra and 𝑎, 𝑏 ∈ 𝐵. Then following statements are
equivalent.
1. 𝑎 ≤ 𝑏,
2. 𝑎 ∧ 𝑏´ = 0,
3. 𝑎´ ∨ 𝑏 = 1.

Proof: (1) ⇒ (2): If 𝑎 ≤ 𝑏 then 𝑎 ∨ 𝑏 = 𝑏. Therefore,
𝑎 ∧ 𝑏´ = 𝑎 ∧ (𝑎 ∨ 𝑏)´
= 𝑎 ∧ (𝑎´ ∧ 𝑏´) (de Morgan law, Theorem 4.1.5 (1))
= (𝑎 ∧ 𝑎´) ∧ 𝑏´
= 0 ∧ 𝑏´ (by Theorem 4.1.3 (5))
= 0.
(2) ⇒ (3): If 𝑎 ∧ 𝑏´ = 0 then 𝑎´ ∨ 𝑏 = (𝑎 ∧ 𝑏´)´ = 0´ = 1.
(3) ⇒ (1): If 𝑎´ ∨ 𝑏 = 1, then
𝑎 = 𝑎 ∧ (𝑎´ ∨ 𝑏)
= (𝑎 ∧ 𝑎´) ∨ (𝑎 ∧ 𝑏) (by Theorem 4.1.3 (3))
= 0 ∨ (𝑎 ∧ 𝑏) (by Theorem 4.1.3 (5))
= 𝑎 ∧ 𝑏.
Thus 𝑎 ≤ 𝑏.

Lemma 4.1.16: Let 𝐵 be a Boolean algebra and 𝑏 and 𝑐 be elements in 𝐵 such that 𝑏 ≰ 𝑐. Then
there exists an atom 𝑎 ∈ 𝐵 such that 𝑎 ≤ 𝑏 and 𝑎 ≰ 𝑐.

Proof: By Lemma 4.1.15, 𝑏 ≰ 𝑐 gives 𝑏 ∧ 𝑐´ ≠ 0. Hence, by Lemma 4.1.13, there exists an atom 𝑎 such that 𝑎 ≤ 𝑏 ∧ 𝑐´. Consequently 𝑎 ≤ 𝑏, and 𝑎 ≰ 𝑐 since 𝑎 ≤ 𝑐´.

Lemma 4.1.17: Let 𝑏 ∈ 𝐵 be non-zero and let 𝑎1, … , 𝑎𝑛 be all the atoms of 𝐵 such that 𝑎𝑖 ≤ 𝑏. Then 𝑏 = 𝑎1 ∨ … ∨ 𝑎𝑛. Furthermore, if 𝑎, 𝑎1, … , 𝑎𝑛 are atoms of 𝐵 such that 𝑎 ≤ 𝑏, 𝑎𝑖 ≤ 𝑏 for all 𝑖 = 1, … , 𝑛 and 𝑏 = 𝑎1 ∨ … ∨ 𝑎𝑛, then 𝑎 = 𝑎𝑖 for some 𝑖 = 1, … , 𝑛.

Proof: Let 𝑏1 = 𝑎1 ∨ … ∨ 𝑎𝑛. Since 𝑎𝑖 ≤ 𝑏 for each 𝑖, we know that 𝑏1 ≤ 𝑏. If we can show that 𝑏 ≤ 𝑏1, then the lemma is true by antisymmetry. Assume that 𝑏 ≰ 𝑏1. Then by Lemma 4.1.16 there exists an atom 𝑎 such that 𝑎 ≤ 𝑏 and 𝑎 ≰ 𝑏1. Since 𝑎 is an atom and 𝑎 ≤ 𝑏, we can deduce that 𝑎 = 𝑎𝑖 for some 𝑎𝑖. But then 𝑎 ≤ 𝑏1, which is impossible. Therefore 𝑏 ≤ 𝑏1.

Now suppose that 𝑏 = 𝑎1 ∨ … ∨ 𝑎𝑛. If 𝑎 is an atom with 𝑎 ≤ 𝑏, then
𝑎 = 𝑎 ∧ 𝑏 = 𝑎 ∧ (𝑎1 ∨ … ∨ 𝑎𝑛) = (𝑎 ∧ 𝑎1) ∨ … ∨ (𝑎 ∧ 𝑎𝑛).
Since 𝑎 ≠ 0, at least one of the terms 𝑎 ∧ 𝑎𝑖 is non-zero, and by Lemma 4.1.14 this can happen only if 𝑎 = 𝑎𝑖. Hence 𝑎 = 𝑎𝑖 for some 𝑖.

Theorem 4.1.18: Let 𝐵 be a finite Boolean algebra. Then there exists a set 𝑋 such that 𝐵 is
isomorphic to 𝑃(𝑋).

Proof: We will show that 𝐵 is isomorphic to 𝑃(𝑋), where 𝑋 is the set of atoms of 𝐵. By Lemma 4.1.17, each non-zero 𝑎 ∈ 𝐵 can be written uniquely as 𝑎 = 𝑎1 ∨ … ∨ 𝑎𝑛 for atoms 𝑎1, … , 𝑎𝑛 ∈ 𝑋. Consequently we can define a map 𝜙: 𝐵 → 𝑃(𝑋) by
𝜙(𝑎) = 𝜙(𝑎1 ∨ … ∨ 𝑎𝑛) = {𝑎1, … , 𝑎𝑛}, with 𝜙(0) = ∅.
Clearly 𝜙 is onto.
Now let 𝑎 = 𝑎1 ∨ … ∨ 𝑎𝑛 and 𝑏 = 𝑏1 ∨ … ∨ 𝑏𝑚 be elements in 𝐵, where each 𝑎𝑖 and each 𝑏𝑖
is an atom. If 𝜙(𝑎) = 𝜙(𝑏) then {𝑎1 , … , 𝑎𝑛 } = {𝑏1 , … , 𝑏𝑚 } and 𝑎 = 𝑏. Consequently 𝜙 is
injective.
The join of 𝑎 and 𝑏 is preserved by 𝜙, since
𝜙(𝑎 ∨ 𝑏) = 𝜙(𝑎1 ∨ … ∨ 𝑎𝑛 ∨ 𝑏1 ∨ … ∨ 𝑏𝑚)
= {𝑎1, … , 𝑎𝑛, 𝑏1, … , 𝑏𝑚}
= {𝑎1, … , 𝑎𝑛} ∪ {𝑏1, … , 𝑏𝑚}
= 𝜙(𝑎1 ∨ … ∨ 𝑎𝑛) ∪ 𝜙(𝑏1 ∨ … ∨ 𝑏𝑚)
= 𝜙(𝑎) ∪ 𝜙(𝑏).
Similarly 𝜙(𝑎 ∧ 𝑏) = 𝜙(𝑎) ∩ 𝜙(𝑏). Thus meet and join are both preserved, hence the result
follows.
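The structure of the proof can be seen concretely by taking 𝐵 to be a power-set algebra itself. The following sketch (our own illustration) computes the atoms of 𝐵 = 𝑃({1, 2, 3}) and checks Lemma 4.1.17: every element is the join of the atoms below it, which is exactly what makes the map 𝜙 of Theorem 4.1.18 well defined:

```python
from itertools import combinations

# Model the finite Boolean algebra B = P({1,2,3}); the order is inclusion.
X = [1, 2, 3]
B = [frozenset(c) for r in range(len(X) + 1) for c in combinations(X, r)]

# Atoms are the minimal non-zero elements: here, the singletons.
atoms = [a for a in B if a and all(not (b and b < a) for b in B)]
assert sorted(map(sorted, atoms)) == [[1], [2], [3]]

# Lemma 4.1.17: every element is the join (union) of the atoms below it.
for b in B:
    below = [a for a in atoms if a <= b]
    joined = frozenset().union(*below) if below else frozenset()
    assert joined == b
print("every element of B is the join of the atoms below it")
```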

Corollary 4.1.19: If 𝐵 is a finite Boolean algebra then 𝐵 has 2ⁿ elements, where 𝑛 is the number of atoms in 𝐵.

Proof: From the theorem we have 𝐵 ≃ 𝑃(𝐸) for some finite set 𝐸. Without loss of generality
we may assume that 𝐸 = {1,2, … , 𝑛}. Let 2 denote the two-element chain 0 < 1 and consider
the mapping 𝑓: 𝑃(𝐸) ⟶ 𝟐𝒏 given by 𝑓(𝑋) = (𝑥1 , 𝑥2 , . . . , 𝑥𝑛 ) where
1 if 𝑖 ∈ 𝑋;
𝑥𝑖 = {
0 otherwise.
Given 𝐴, 𝐵 ∈ 𝑃(𝐸), let 𝑓(𝐴) = (𝑎1 , 𝑎2 , . . . , 𝑎𝑛 ) and 𝑓 (𝐵) = (𝑏1 , 𝑏2 , . . . , 𝑏𝑛 ). Then we have
𝐴 ⊆ 𝐵 if and only if for every 𝑖, 𝑖 ∈ 𝐴 implies 𝑖 ∈ 𝐵, which is equivalent to 𝑎𝑖 = 1 implies 𝑏𝑖 =
1, that is to 𝑎𝑖 ≤ 𝑏𝑖 , which by the definition is equivalent to 𝑓(𝐴) ≤ 𝑓(𝐵). Moreover given
any 𝑥 = (𝑥1, 𝑥2, . . . , 𝑥𝑛) ∈ 𝟐𝒏 we have 𝑓(𝐶) = 𝑥, where 𝐶 = {𝑖 │ 𝑥𝑖 = 1}. It
therefore follows by Theorem 1.5.2 that 𝑃(𝐸) ≃ 𝟐𝒏. Hence 𝐵 ≃ 𝟐𝒏 and so has 2ⁿ elements.
Moreover, since 𝐸 has 𝑛 elements, 𝐵 ≃ 𝑃(𝐸) has 𝑛 atoms.
4.2 Boolean Rings
In this section we will discuss Boolean rings and some of their types. We will also look at some properties of Boolean rings. The section ends with some important results on Boolean rings.

A ring is an abstract version of arithmetic, the kind of thing we studied in school. The prototype is the ring of integers. It consists of a universe — the set of integers — and three operations on the universe: the binary operations of addition and multiplication, and the unary operation of forming the negative −𝑝. There are also two distinguished integers, zero and one.

The set of integers satisfies a number of basic laws that are familiar from school mathematics;

The associative laws for addition and multiplication;

(1) 𝑝 + (𝑞 + 𝑟) = (𝑝 + 𝑞) + 𝑟 for all 𝑝, 𝑞, 𝑟 ∈ 𝑍;

(2) 𝑝 · (𝑞 · 𝑟) = (𝑝 · 𝑞) · 𝑟.

The commutative laws for addition and multiplication;

(3) 𝑝 + 𝑞 = 𝑞 + 𝑝;

(4) 𝑝 · 𝑞 = 𝑞 · 𝑝.

The identity laws for addition and multiplication;

(5) 𝑝 + 0 = 𝑝;

(6) 𝑝 · 1 = 𝑝.

The inverse law for addition;

(7) 𝑝 + (−𝑝) = 0.

and the distributive laws for multiplication over addition;

(8) 𝑝 · (𝑞 + 𝑟) = 𝑝 · 𝑞 + 𝑝 · 𝑟;

(9) (𝑞 + 𝑟) · 𝑝 = 𝑞 · 𝑝 + 𝑟 · 𝑝.

Any universe with such operations satisfying the above properties is called a ring. The difference between the ring of integers and an arbitrary ring is that, in the latter, the universe may be an arbitrary non-empty set of elements, not just the set of integers, and the operations take their arguments and values from this set. The commutative law for multiplication is not required to hold in an arbitrary ring; if it does, the ring is said to be commutative. Also, a ring is not always required to have a unit, an element 1 satisfying (6); if it does, it is called a ring with unit.

There are other natural examples of rings besides the integers. The most trivial is the ring with
just one element in its universe: zero. It is called the degenerate ring. The simplest non-
degenerate ring with unit has just two elements, zero and one. The operations of addition and
multiplication are described by the following arithmetic tables:
+ | 0 1        · | 0 1
0 | 0 1        0 | 0 0
1 | 1 0        1 | 0 1
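These two tables are exactly arithmetic modulo 2: addition is "exclusive or" and multiplication is "and". A quick illustrative check that this two-element structure behaves as the tables say:

```python
# The two-element ring: addition is XOR, multiplication is AND.
def add(p, q):
    return p ^ q

def mul(p, q):
    return p & q

R = (0, 1)
for p in R:
    assert add(p, p) == 0        # every element is its own additive inverse
    assert mul(p, p) == p        # every element is its own square
    assert add(p, 0) == p        # identity law for addition
    assert mul(p, 1) == p        # identity law for multiplication
    for q in R:
        for r in R:
            # distributive law: p · (q + r) = p · q + p · r
            assert mul(p, add(q, r)) == add(mul(p, q), mul(p, r))
print("two-element ring verified")
```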

Remark 4.2.1: An examination of the above tables shows that the two-element ring has several
special properties. First of all, every element is its own additive inverse, that is:
(10) 𝑝 + 𝑝 = 0.

Therefore, the operation of negation is superfluous: every element is its own negative. Rings
satisfying condition (10) are said to have characteristic 2.
Second, every element is its own square, that is;
(11) 𝑝 · 𝑝 = 𝑝.
Elements with this property are called idempotent. Rings with the property that every element is idempotent have a special name, defined below:

Definition 4.2.2: A ring 𝑅 with unit is said to be a Boolean ring if every element of it is idempotent.

Example 4.2.3: The two-element ring is the simplest non-degenerate example of a Boolean ring.

The condition of idempotence in the definition of a Boolean ring has quite a strong influence
on the structure of such rings. Two of its most surprising consequences are proved in the next
proposition.

Definition 4.2.4: The characteristic of a ring 𝑅 is the least positive integer 𝑛 such that 𝑛𝑥 = 0 for all 𝑥 ∈ 𝑅.

Proposition 4.2.5: Let 𝐵 be a Boolean ring. Then:
(a) 𝐵 has characteristic 2;
(b) 𝐵 is commutative.

Proof: (a) Let 𝑝, 𝑞 ∈ 𝐵. Then by idempotence (11),
𝑝 + 𝑞 = (𝑝 + 𝑞)2 = 𝑝2 + 𝑝𝑞 + 𝑞𝑝 + 𝑞 2 = 𝑝 + 𝑝𝑞 + 𝑞𝑝 + 𝑞,
which implies 0 = 𝑝𝑞 + 𝑞𝑝. (12)
Putting 𝑞 = 𝑝 in (12), we get 0 = 𝑝2 + 𝑝2 = 𝑝 + 𝑝 (using (11)), so (10) holds and 𝐵 has characteristic 2.

(b) By (a) every element is its own negative, so in particular
𝑝 ∙ 𝑞 = −(𝑝 ∙ 𝑞). (13)
From (12) we have 𝑝 ∙ 𝑞 = −(𝑞 ∙ 𝑝), and combining this with (13) gives
𝑝 ∙ 𝑞 = 𝑞 ∙ 𝑝.
Thus (b) holds, and hence the result follows.

Since negation in a Boolean ring is the identity operation, it is not necessary to use the minus sign for the additive inverse of an element of a Boolean ring. So in the case of Boolean rings only a slight modification of the set of ring axioms is needed: the identity (7) should be replaced by (10). From now on the official axioms for a Boolean ring are:
(1)–(3), (5), (6), (8)–(11).

Example 4.2.6: The universe of this example consists of ordered pairs (𝑝, 𝑞) of elements of the two-element ring of Example 4.2.3. Let 𝑆 denote the universe; then

𝑆 = {(0,0), (0,1), (1,0), (1,1)}.


To add or multiply two pairs in 𝑆, just add or multiply the corresponding coordinates as in the two-element ring 𝑅; that is:
(a)(𝑝0 , 𝑝1 ) + (𝑞0 , 𝑞1 ) = (𝑝0 + 𝑞0 , 𝑝1 + 𝑞1 ),
(b) (𝑝0 , 𝑝1 ). (𝑞0 , 𝑞1 ) = (𝑝0 𝑞0 , 𝑝1 . 𝑞1 ).

These equations make sense because their right sides refer to the elements and operations of 𝑅.
The zero and unit of the ring are the pairs (0,0) and (1,1).
It is a simple matter to check that the axioms for Boolean rings are true in 𝑆. In each case, the verification of an axiom reduces to its validity in 𝑅.

Example 4.2.7: The preceding example can easily be generalized to each positive integer 𝑛. The universe in this case is the set of 𝑛-termed sequences (𝑝0, 𝑝1, . . . , 𝑝𝑛−1) of elements of the two-element ring. The sum and product of two such 𝑛-tuples are defined coordinate-wise, just as in the case of ordered pairs in Example 4.2.6:
(𝑝0, 𝑝1, . . . , 𝑝𝑛−1) + (𝑞0, 𝑞1, . . . , 𝑞𝑛−1) = (𝑝0 + 𝑞0, 𝑝1 + 𝑞1, . . . , 𝑝𝑛−1 + 𝑞𝑛−1),
(𝑝0, 𝑝1, . . . , 𝑝𝑛−1) · (𝑞0, 𝑞1, . . . , 𝑞𝑛−1) = (𝑝0 · 𝑞0, 𝑝1 · 𝑞1, . . . , 𝑝𝑛−1 · 𝑞𝑛−1).
The zero and unit are the 𝑛-tuples (0, 0, . . . , 0) and (1, 1, . . . , 1).
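The coordinate-wise construction is easy to realize in code. The following sketch (an illustration with our own helper names) builds the ring of 𝑛-tuples over the two-element ring for 𝑛 = 3 and checks the Boolean ring properties:

```python
from itertools import product

n = 3
# All n-tuples over the two-element ring, with coordinate-wise operations.
ring = list(product((0, 1), repeat=n))

def add(p, q):
    return tuple((pi + qi) % 2 for pi, qi in zip(p, q))

def mul(p, q):
    return tuple(pi * qi for pi, qi in zip(p, q))

zero, one = (0,) * n, (1,) * n
for p in ring:
    assert mul(p, p) == p       # every element is idempotent
    assert add(p, p) == zero    # characteristic 2
    assert mul(p, one) == p     # (1, 1, ..., 1) is the unit
assert len(ring) == 2 ** n
print("coordinate-wise ring on", len(ring), "tuples is a Boolean ring")
```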

Example 4.2.8: Let 𝑋 be an arbitrary set and 𝑅 𝑋 the set of all functions from 𝑋 into the two-element ring 𝑅. The elements of 𝑅 𝑋 will be called 2-valued functions on 𝑋. The distinguished elements and the operations of 𝑅 𝑋 are defined point-wise.
This means that 0 and 1 in 𝑅 𝑋 are the constant functions defined for each 𝑥 in 𝑋 by;
0(𝑥) = 0 and 1(𝑥) = 1
Then the functions 𝑝 + 𝑞 and 𝑝 ∙ 𝑞 are defined by:
(𝑝 + 𝑞)(𝑥) = 𝑝(𝑥) + 𝑞(𝑥),
(𝑝 ∙ 𝑞)(𝑥) = 𝑝(𝑥) ∙ 𝑞(𝑥).
The above equations make sense, as their right-hand sides refer to elements and operations of the two-element ring 𝑅.

Verifying that 𝑅 𝑋 is a Boolean ring is conceptually the same as verifying that 𝑆 (as in Example 4.2.6) is a Boolean ring, but notationally it looks a bit different. Consider, as an example, the verification of the distributive law (8). In the context of 𝑅 𝑋, the left and right sides of (8) denote functions from 𝑋 into 𝑅. It must be shown that these two functions are equal. They obviously have the same domain 𝑋, so it suffices to check that the values of the two functions agree at each element 𝑥 in the domain, that is,
{𝑝. (𝑞 + 𝑟)} (𝑥) = (𝑝. 𝑞 + 𝑝. 𝑟) (𝑥) (14)
The left and right sides of (14) evaluate to
𝑝(𝑥). {𝑞(𝑥) + 𝑟(𝑥)} and 𝑝(𝑥) . 𝑞(𝑥) + 𝑝(𝑥). 𝑟(𝑥) (15)
respectively, by the definitions of addition and multiplication in 𝑅 𝑋 . Each of these terms
denotes an element of 𝑅. Since, the distributive law holds in 𝑅, the terms in (15) are equal.
Therefore the equation (14) is true. The other Boolean axioms are verified for 𝑅 𝑋 in similar
fashion.

The next result shows that in an arbitrary ring some properties hold in addition to those listed in (1)–(10):

Theorem 4.2.9: In an arbitrary ring,

𝑝 ∙ 0 = 0 ∙ 𝑝 = 0 and 𝑝 ∙ (−𝑞) = (−𝑝) ∙ 𝑞 = −(𝑝 ∙ 𝑞).

Proof: We can write
𝑝 ∙ 0 = 𝑝 ∙ (0 + 0) = 𝑝 ∙ 0 + 𝑝 ∙ 0,
and cancelling 𝑝 ∙ 0 in the additive group gives 0 = 𝑝 ∙ 0.
Similarly 0 ∙ 𝑝 = 0.
Thus 𝑝 ∙ 0 = 0 ∙ 𝑝 = 0.
Now 𝑝(−𝑞) + 𝑝 ∙ 𝑞 = 𝑝 ∙ (−𝑞 + 𝑞) by (8)
= 𝑝 ∙ (0)
=0
this implies 𝑝 ∙ (−𝑞) = −(𝑝 ∙ 𝑞).
Similarly (−𝑝) ∙ 𝑞 = −(𝑝 ∙ 𝑞).

Definition 4.2.10: A Boolean group 𝐵 is a group in which every element has order two (in
other words the law (10) is valid, that is for all 𝑝 ∈ 𝐵 𝑝 + 𝑝 = 0).

Theorem 4.2.11: Every Boolean group 𝐵 is commutative (that is, the commutative law (3) is
valid).

Proof: Since 𝐵 satisfies law (10), we can write;


(𝑝 + 𝑞) + (𝑝 + 𝑞) = 0 for all 𝑝, 𝑞 ∈ 𝐵. (i)
Now 𝑝 + 𝑞 = 𝑝 + 0 + 𝑞, then by (i) we have;
𝑝 + 𝑞 = 𝑝 + (𝑝 + 𝑞) + (𝑝 + 𝑞) + 𝑞
= (𝑝 + 𝑝) + (𝑞 + 𝑝) + (𝑞 + 𝑞) by (1)
= 0 + (𝑞 + 𝑝) + 0
= 𝑞 + 𝑝.
Thus 𝐵 is commutative.

Definition 4.2.12: A zero divisor in a ring is a non-zero element 𝑝 such that 𝑝 ∙ 𝑞 = 0 for some
non-zero element 𝑞.

Theorem 4.2.13: Boolean ring with or without unit having more than two elements has zero
divisors.

Proof: Let 𝐵 be a Boolean ring having more than two elements. Then there exist two distinct non-zero elements 𝑥, 𝑦 ∈ 𝐵, and since the characteristic is 2 we have
𝑥 + 𝑦 ≠ 0, for 𝑥 + 𝑦 = 0 would give 𝑥 = 𝑦.
Now we have following cases to consider:
Case 1: If 𝑥 ∙ 𝑦 = 0, then we are done.
Case 2: If 𝑥 ∙ 𝑦 ≠ 0, then;

(𝑥 ∙ 𝑦) ∙ (𝑥 + 𝑦) = 𝑥 ∙ 𝑦 ∙ 𝑥 + 𝑥 ∙ 𝑦 ∙ 𝑦
= 𝑥2 ∙ 𝑦 + 𝑥 ∙ 𝑦2
=𝑥∙𝑦+𝑥∙𝑦 (using 11),
= 0.
Hence 𝑥 ∙ 𝑦 is zero-divisor in this case. Thus the result follows.
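Theorem 4.2.13 is easy to witness in the four-element Boolean ring 𝑆 of Example 4.2.6. In this sketch (an illustration under our own naming) a search finds exactly the zero divisors that the argument predicts:

```python
from itertools import product

# The four-element Boolean ring S of Example 4.2.6: pairs over Z2.
S = list(product((0, 1), repeat=2))

def add(p, q):
    return tuple((a + b) % 2 for a, b in zip(p, q))

def mul(p, q):
    return tuple(a * b for a, b in zip(p, q))

zero = (0, 0)
zero_divisors = [p for p in S if p != zero and
                 any(q != zero and mul(p, q) == zero for q in S)]
# (1,0) and (0,1) annihilate each other, as the theorem predicts.
assert set(zero_divisors) == {(1, 0), (0, 1)}
print("zero divisors in S:", zero_divisors)
```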

4.3 Boolean Algebras Versus Rings


In this section our focus will mainly be on how Boolean rings can be converted into Boolean algebras and Boolean algebras into Boolean rings. The section ends with some important results on these conversions.

Boolean rings and Boolean algebras:


The theories of Boolean algebras and Boolean rings are very closely related; in fact, they are
just different ways of looking at the same subject. More precisely every Boolean algebra can
be turned into a Boolean ring by defining appropriate operations of addition and multiplication
and conversely every Boolean ring can be turned into a Boolean algebra by defining
appropriate operations of join, meet and complement. We discuss this next as follows:

How to convert Boolean rings into Boolean algebras and conversely?


We will discuss the general technique for converting every Boolean ring into a Boolean algebra and conversely.

Motivated by the set-theoretic example of the power set, we can introduce into every Boolean algebra 𝐴 operations of addition and multiplication very much like symmetric difference and intersection; just define
(3) 𝑝 + 𝑞 = (𝑝 ∧ 𝑞′) ∨ (𝑝′ ∧ 𝑞) and 𝑝 · 𝑞 = 𝑝 ∧ 𝑞.
Under these operations, together with 0 and 1 (the zero and unit of the Boolean algebra), 𝐴 becomes a Boolean ring. Conversely, every Boolean ring can be turned into a Boolean algebra with the same zero and unit; just define operations of join, meet, and complement by
(4) 𝑝 ∨ 𝑞 = 𝑝 + 𝑞 + 𝑝 · 𝑞, 𝑝 ∧ 𝑞 = 𝑝 · 𝑞 and 𝑝′ = 𝑝 + 1.
Start with a Boolean algebra, turn it into a Boolean ring (with the same zero and unit) using the
definitions in (3), and then convert the ring into a Boolean algebra using the definitions in (4);
the result is the original Boolean algebra. Conversely start with a Boolean ring, convert it into
a Boolean algebra using the definitions in (4) and then convert the Boolean algebra into a
Boolean ring using the definitions in (3); the result is the original ring.
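The round trip described above can be traced concretely on the power-set algebra. The following sketch (our own illustration; the helper names are hypothetical) turns 𝑃(𝑋) into a ring via the definitions in (3) and then recovers the original join, meet and complement via (4):

```python
from itertools import combinations

# The Boolean algebra P(X): union, intersection, complement.
X = frozenset({1, 2})
P = [frozenset(c) for r in range(len(X) + 1)
     for c in combinations(sorted(X), r)]

# Algebra -> ring, definitions (3): p + q = (p ∧ q') ∨ (p' ∧ q), p·q = p ∧ q.
def r_add(p, q):
    return (p & (X - q)) | ((X - p) & q)   # symmetric difference

def r_mul(p, q):
    return p & q

# Ring -> algebra, definitions (4): p ∨ q = p + q + p·q, p' = p + 1.
def a_join(p, q):
    return r_add(r_add(p, q), r_mul(p, q))

def a_comp(p):
    return r_add(p, X)                     # the ring unit is X

# The round trip recovers the original Boolean algebra operations.
for p in P:
    assert a_comp(p) == X - p
    for q in P:
        assert a_join(p, q) == p | q
        assert r_mul(p, q) == p & q
print("algebra -> ring -> algebra round trip recovers P(X)")
```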

Theorem 4.3.1: Let (𝐵; ∧,∨, ´) be a Boolean algebra. Define a multiplication and an addition
on 𝐵 by setting:
For all 𝑥, 𝑦 ∈ 𝐵 𝑥𝑦 = 𝑥 ∧ 𝑦, 𝑥 + 𝑦 = (𝑥 ∧ 𝑦´) ∨ (𝑥´ ∧ 𝑦).
Then (𝐵; ⋅, +) is a Boolean ring.

Proof: Clearly (𝐵; ⋅) is a semigroup with an identity, namely the top element 1 of 𝐵. Moreover, for every 𝑥 ∈ 𝐵 we have 𝑥 2 = 𝑥 ∙ 𝑥 = 𝑥 ∧ 𝑥 = 𝑥, so every element is idempotent. Now given 𝑥, 𝑦, 𝑧 ∈ 𝐵 it is easy to verify that
(𝑥 + 𝑦) + 𝑧 = (𝑥 ∧ 𝑦´ ∧ 𝑧´) ∨ (𝑥´ ∧ 𝑦 ∧ 𝑧´) ∨ (𝑥´ ∧ 𝑦´ ∧ 𝑧) ∨ (𝑥 ∧ 𝑦 ∧ 𝑧),
which, being symmetric in 𝑥, 𝑦, 𝑧, is also equal to 𝑥 + (𝑦 + 𝑧). Since 𝑥 + 0 = (𝑥 ∧ 0´) ∨ (𝑥´ ∧ 0) = (𝑥 ∧ 1) ∨ 0 = 𝑥 and 𝑥 + 𝑥 = (𝑥 ∧ 𝑥´) ∨ (𝑥´ ∧ 𝑥) = 0 ∨ 0 = 0, we see that
(𝐵; +) is an abelian group in which −𝑥 = 𝑥 for every 𝑥 ∈ 𝐵. Finally, for all 𝑥, 𝑦, 𝑧 ∈ 𝐵 we have
𝑥𝑦 + 𝑥𝑧 = [𝑥𝑦 ∧ (𝑥𝑧)´] ∨ [(𝑥𝑦)´ ∧ 𝑥𝑧]
= [𝑥 ∧ 𝑦 ∧ (𝑥 ∧ 𝑧)´] ∨ [(𝑥 ∧ 𝑦)´ ∧ 𝑥 ∧ 𝑧]
= [𝑥 ∧ 𝑦 ∧ (𝑥´ ∨ 𝑧´)] ∨ [(𝑥´ ∨ 𝑦´) ∧ 𝑥 ∧ 𝑧]
= (𝑥 ∧ 𝑦 ∧ 𝑧´) ∨ (𝑥 ∧ 𝑦´ ∧ 𝑧)
= 𝑥 ∧ [(𝑦 ∧ 𝑧´) ∨ (𝑦´ ∧ 𝑧)]
= 𝑥(𝑦 + 𝑧).

Thus (𝐵; ⋅, +) is a Boolean ring.


We can also proceed in the opposite direction. Given a Boolean ring (𝐵; ∙, +) we can equip it with the structure of a Boolean algebra. For this purpose, we first note that in such a ring we have
𝑥 + 𝑦 = (𝑥 + 𝑦)2 = 𝑥 2 + 𝑥𝑦 + 𝑦𝑥 + 𝑦 2 = 𝑥 + 𝑥𝑦 + 𝑦𝑥 + 𝑦,
whence 𝑥𝑦 + 𝑦𝑥 = 0 and so −𝑥𝑦 = 𝑦𝑥. Taking 𝑦 = 𝑥, we obtain −𝑥 2 = 𝑥 2, that is, −𝑥 = 𝑥, so that 𝑥 + 𝑥 = 0. Thus a Boolean ring is of characteristic 2. Now since 𝑥 = −𝑥 for every 𝑥, we have 𝑥𝑦 = −𝑥𝑦 = 𝑦𝑥, whence we see that a Boolean ring is commutative.

Theorem 4.3.2: Let (𝐵; ⋅, +) be a Boolean ring. For all 𝑥, 𝑦, 𝑧 ∈ 𝐵 define


𝑥 ∧ 𝑦 = 𝑥𝑦, 𝑥 ∨ 𝑦 = 𝑥 + 𝑦 + 𝑥𝑦, 𝑥´ = 1 + 𝑥.

Then (𝐵; ∧,∨, ´) is a Boolean algebra.

Proof: It is clear from above that (𝐵; ∧) is an abelian semigroup. Also,


(𝑥 ∨ 𝑦) ∨ 𝑧 = (𝑥 ∨ 𝑦) + 𝑧 + (𝑥 ∨ 𝑦)𝑧 = 𝑥 + 𝑦 + 𝑥𝑦 + 𝑧 + 𝑥𝑧 + 𝑦𝑧 + 𝑥𝑦𝑧,
the symmetry of which shows that (𝐵; ∨) is also a semigroup, again abelian. Since now
𝑥 ∧ (𝑥 ∨ 𝑦) = 𝑥(𝑥 + 𝑦 + 𝑥𝑦) = 𝑥 + 𝑥𝑦 + 𝑥𝑦 = 𝑥 + 0 = 𝑥 and 𝑥 ∨ (𝑥 ∧ 𝑦) =
𝑥 + 𝑥𝑦 + 𝑥²𝑦 = 𝑥 + 𝑥𝑦 + 𝑥𝑦 = 𝑥 + 0 = 𝑥, it follows that (𝐵; ∧,∨) is a lattice. This
lattice is distributive, since:
𝑥 ∧ (𝑦 ∨ 𝑧) = 𝑥(𝑦 + 𝑧 + 𝑦𝑧)
= 𝑥𝑦 + 𝑥𝑧 + 𝑥𝑦𝑧
= 𝑥𝑦 + 𝑥𝑧 + 𝑥𝑦𝑥𝑧
= 𝑥𝑦 ∨ 𝑥𝑧,
= (𝑥 ∧ 𝑦) ∨ (𝑥 ∧ 𝑧).
Now the order in this lattice is given by;
𝑥 ≤ 𝑦 if and only if 𝑥 = 𝑥 ∧ 𝑦 = 𝑥𝑦.
Hence the lattice is bounded, with top element 1 (since 𝑥 = 𝑥 ∙ 1 = 𝑥 ∧ 1 gives 𝑥 ≤ 1) and
bottom element 0 (since 0 = 0 ∙ 𝑥 = 0 ∧ 𝑥 gives 0 ≤ 𝑥). Finally,
𝑥 ∨ 𝑥´ = 𝑥 + 𝑥´ + 𝑥𝑥´ = 𝑥 + (1 + 𝑥) + 𝑥(1 + 𝑥) = 1 and 𝑥 ∧ 𝑥´ = 𝑥𝑥´ = 𝑥(1 + 𝑥) = 𝑥 + 𝑥² = 𝑥 +
𝑥 = 0, and so 𝑥´ is the complement of 𝑥. Thus (𝐵; ∧,∨, ´) is a Boolean algebra.
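In the opposite direction, Theorem 4.3.2 can be tested on a concrete Boolean ring. The ring (ℤ/2)⁴, encoded below as 4-bit masks with XOR as + and AND as ⋅ (an encoding of our own, not from the text), yields the expected Boolean algebra under 𝑥 ∧ 𝑦 = 𝑥𝑦, 𝑥 ∨ 𝑦 = 𝑥 + 𝑦 + 𝑥𝑦 and 𝑥´ = 1 + 𝑥.

```python
# Boolean ring: bitmasks 0..15 under XOR (the ring +) and AND (the ring ·);
# the ring identity 1 is the all-ones mask.
B = list(range(16))
ONE = 0b1111

def meet(x, y):         # x ∧ y = xy
    return x & y

def join(x, y):         # x ∨ y = x + y + xy
    return x ^ y ^ (x & y)

def comp(x):            # x´ = 1 + x
    return ONE ^ x

# Lattice and Boolean-algebra laws obtained in the theorem:
assert all(meet(x, join(x, y)) == x for x in B for y in B)    # absorption
assert all(join(x, meet(x, y)) == x for x in B for y in B)    # absorption
assert all(meet(x, join(y, z)) == join(meet(x, y), meet(x, z))
           for x in B for y in B for z in B)                  # distributivity
assert all(join(x, comp(x)) == ONE and meet(x, comp(x)) == 0 for x in B)
print("Boolean algebra laws verified for the ring (Z/2)^4")
```

Note that 𝑥 + 𝑦 + 𝑥𝑦 computed by XOR and AND is exactly the bitwise OR, so the derived join is the expected one.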

4.4 Applications of Boolean Algebras

Propositional Calculus

We now turn our attention to the applications of Boolean algebra to two-valued logic and in
particular to the calculus of propositions. Historically, lattice theory had its beginnings in the
investigations of Boole into the formalization of logic.

Definition 4.4.1: By a proposition we mean a statement which in some clearly defined sense
is either true (𝑇) or false (𝐹). Thus of the two propositions
Grass is green,
Fish grow on trees,
the first is 𝑇 and the second is 𝐹. With the help of words ‘and’ (∧), ‘or’ (∨), ‘not’ (∼)
compound propositions such as
Grass is not green,
Grass is green and fish grow on trees,
can be constructed from simpler ones and truth values 𝑇 or 𝐹 of compound propositions may
be calculated from those of simpler ones of which they are composed by means of the logical
matrices defined below.
Before defining logical matrices, we fix some notation.
Notation: The negation of a statement is usually denoted by the symbol ‘∼’; since we have
already denoted negation by the symbol ′ (complementation), we use the symbol ′ throughout
instead of ‘∼’.

Definition 4.4.2: Logical matrices are to be regarded as statements of the axioms upon which
the propositional calculus is based. They are as follows:

∧ 𝑇 𝐹 ∨ 𝑇 𝐹

𝑇 𝑇 𝐹 𝑇 𝑇 𝑇

𝐹 𝐹 𝐹 𝐹 𝑇 𝐹

Thus ‘grass is green and fish grow on trees’ is 𝐹 because 𝐹 appears in row 𝑇 and
column 𝐹 of the matrix for ∧.

It is usual to denote propositions by 𝑝, 𝑞, … and propositions compounded from these by
𝑝 ∧ 𝑞, 𝑝 ∨ 𝑞, 𝑝′, 𝑝 ∧ 𝑞′, 𝑝 ∧ (𝑞 ∨ 𝑟),
and so on. By way of clarification it should be stated that ∨ denotes the inclusive ‘or’ so that
𝑝 ∨ 𝑞 means ‘𝑝 or 𝑞 or both’. The exclusive ‘or’ corresponds to the + of Boolean rings. We
could write 𝑝 + 𝑞 to mean ‘𝑝 or 𝑞 but not both’.
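These matrices can be tabulated mechanically. In the sketch below (our own illustration; Python's True/False stand for 𝑇/𝐹) the connectives ∧, ∨ and the exclusive ‘+’ are rendered by `and`, `or` and `!=`:

```python
from itertools import product

def matrix(op):
    """Return the logical matrix of a binary connective as a dict."""
    return {(p, q): op(p, q) for p, q in product([True, False], repeat=2)}

AND = matrix(lambda p, q: p and q)
OR  = matrix(lambda p, q: p or q)     # inclusive 'or'
XOR = matrix(lambda p, q: p != q)     # exclusive 'or', the + of Boolean rings

# Row T, column F of the matrix for ∧ is F
# ('grass is green and fish grow on trees'):
assert AND[(True, False)] is False
assert OR[(True, False)] is True      # the inclusive 'or' is T here
assert XOR[(True, True)] is False     # 'p or q but not both'
print("logical matrices tabulated")
```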

Definition 4.4.3: A proposition which cannot be constructed from simpler propositions
exclusively with the help of ∧, ∨, ′ may be called an elementary proposition.

In general, the truth value of compound proposition can be determined from the logical
matrices and from the truth values of the elementary propositions composing it. There are
however certain compound propositions whose truth value can be determined without a
knowledge of the truth values of the elementary propositions. For instance
Shakespeare wrote Hamlet or Shakespeare did not write Hamlet
is 𝑇 whether or not Shakespeare wrote Hamlet. In the above compound proposition we can
replace the elementary proposition ‘Shakespeare wrote Hamlet’ by any other proposition 𝑝
without altering its truth value. Thus

𝑝 ∨ 𝑝′is 𝑇 for all 𝑝.

Similarly

𝑝 ∧ 𝑝′is 𝐹 for all 𝑝.

These results can be calculated from the logical matrices alone. Such computations are
conveniently set out in the form of truth tables. In such a table all possible combinations of
truth values for the elementary propositions involved are tabulated to the left of the double line.
The columns to the right of the double line are then computed in succession from the logical
matrices. The truth tables for 𝑝 ∨ 𝑝′ and for 𝑝 ∧ 𝑝′ may be set down together as follows.

𝑝 𝑝′ 𝑝 ∨ 𝑝′ 𝑝 ∧ 𝑝′

𝑇 𝐹 𝑇 𝐹

𝐹 𝑇 𝑇 𝐹

Definition 4.4.4: A proposition 𝑞, such as 𝑝 ∨ 𝑝′, is said to be formally true and is called a
tautology if every proposition 𝑞 ∗ obtained from 𝑞 by replacing its elementary propositions by
arbitrary propositions is 𝑇. Correspondingly 𝑞 is said to be formally false and is called an
absurdity or a contradiction if every 𝑞 ∗ is 𝐹. We observe that this notation permits us to write

(𝑝 ∧ 𝑞)∗ = 𝑝∗ ∧ 𝑞 ∗ , (𝑝 ∨ 𝑞)∗ = 𝑝∗ ∨ 𝑞 ∗ , (𝑝′ )∗ = (𝑝∗ )′ .
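Formal truth can be decided mechanically by evaluating a compound proposition under every assignment of truth values to its elementary propositions. A small checker along these lines (our own sketch; `f` is any Boolean-valued function of `nvars` propositional variables):

```python
from itertools import product

def is_tautology(f, nvars):
    """True if f is T under every assignment of truth values."""
    return all(f(*vals) for vals in product([True, False], repeat=nvars))

def is_contradiction(f, nvars):
    """True if f is F under every assignment of truth values."""
    return not any(f(*vals) for vals in product([True, False], repeat=nvars))

# p ∨ p′ is formally true; p ∧ p′ is formally false.
assert is_tautology(lambda p: p or not p, 1)
assert is_contradiction(lambda p: p and not p, 1)
print("p ∨ p′ is a tautology; p ∧ p′ is a contradiction")
```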


Notation: We introduce the symbol ⟷ by defining 𝑝 ⟷ 𝑞 to be an abbreviation for the
proposition (𝑝 ∧ 𝑞) ∨ (𝑝′ ∧ 𝑞 ′ ). The following computation

𝑝 𝑞 𝑝∧𝑞 𝑝′ 𝑞′ (𝑝′ ∧ 𝑞 ′ ) 𝑝⟷𝑞

𝑇 𝑇 𝑇 𝐹 𝐹 𝐹 𝑇

𝑇 𝐹 𝐹 𝐹 𝑇 𝐹 𝐹

𝐹 𝑇 𝐹 𝑇 𝐹 𝐹 𝐹
𝐹 𝐹 𝐹 𝑇 𝑇 𝑇 𝑇

shows that ⟷ has the following logical matrix

⟷ 𝑇 𝐹

𝑇 𝑇 𝐹

𝐹 𝐹 𝑇

We see that 𝑝 ⟷ 𝑞 is 𝑇 if and only if 𝑝 and 𝑞 have equal truth values whatever the truth values
of the elementary propositions composing 𝑝 and 𝑞 may be.

Definition 4.4.5: If the proposition 𝑝 ⟷ 𝑞 is formally true, then we call 𝑝, 𝑞 equivalent
propositions and write 𝑝 ≡ 𝑞.

The above definition of ≡ is helpful for us in the following result;

Proposition 4.4.6: If 𝑝, 𝑞 and 𝑟 are any propositions, then:


(i) 𝑝 ≡ 𝑝,
(ii) if 𝑝 ≡ 𝑞 then 𝑞 ≡ 𝑝,
(iii) if 𝑝 ≡ 𝑞 and 𝑞 ≡ 𝑟then 𝑝 ≡ 𝑟,
(iv) if 𝑝 ≡ 𝑞 then 𝑝′ ≡ 𝑞′,
(v) if 𝑝 ≡ 𝑞then 𝑝 ∧ 𝑟 ≡ 𝑞 ∧ 𝑟,
(vi) if 𝑝 ≡ 𝑞then 𝑝 ∨ 𝑟 ≡ 𝑞 ∨ 𝑟,
(vii) 𝑝 ∧ 𝑞 ≡ 𝑞 ∧ 𝑝,
(viii) 𝑝 ∨ 𝑞 ≡ 𝑞 ∨ 𝑝.

Proof: In each case an indirect proof can be constructed. We exhibit the details for (iii); the rest
follow by similar arguments. Suppose that 𝑝 ≢ 𝑟. Then 𝑝∗ ⟷ 𝑟∗ is 𝐹 for some choice
of 𝑝∗ and 𝑟∗, and so for this choice 𝑝∗ and 𝑟∗ have different truth values. If we suppose 𝑝∗ is
𝑇 and 𝑟∗ is 𝐹, then 𝑝 ≡ 𝑞 states that 𝑝∗ ⟷ 𝑞∗ is 𝑇 for every 𝑞∗ and consequently each 𝑞∗, like
𝑝∗, is 𝑇. Further, 𝑞 ≡ 𝑟 states that 𝑞∗ ⟷ 𝑟∗ is 𝑇, from which we see that 𝑟∗, like 𝑞∗, is 𝑇, in
contradiction to the supposition that 𝑟∗ is 𝐹. A similar contradiction arises if we suppose
that 𝑝∗ is 𝐹 and 𝑟∗ is 𝑇. Thus the validity of (iii) has been demonstrated.

If 𝑝1 ≡ 𝑝2 and 𝑞1 ≡ 𝑞2 , then from (v) and (vii) we have

𝑝1 ∧ 𝑞1 ≡ 𝑝2 ∧ 𝑞1 ≡ 𝑞1 ∧ 𝑝2 ≡ 𝑞2 ∧ 𝑝2 ≡ 𝑝2 ∧ 𝑞2 .

Using (iii) we obtain


(ix) if 𝑝1 ≡ 𝑝2 and 𝑞1 ≡ 𝑞2 then 𝑝1 ∧ 𝑞1 ≡ 𝑝2 ∧ 𝑞2 .

Similarly, from (iii), (vi) and (vii) we have


(x) if 𝑝1 ≡ 𝑝2 and 𝑞1 ≡ 𝑞2 then 𝑝1 ∨ 𝑞1 ≡ 𝑝2 ∨ 𝑞2 .

Remark 4.4.7: Parts (i), (ii), (iii) of the above result show that ≡ is an equivalence relation which
we might well have denoted by =. Our object however has been to elucidate the meaning of
this kind of equality and for this purpose we think the notation ≡ more suggestive.
It is in this sense that the postulates for a distributive lattice are satisfied by interpreting ∨
and ∧ as union and intersection. For instance, the distributive law takes the form
𝑝 ∧ (𝑞 ∨ 𝑟) ≡ (𝑝 ∧ 𝑞) ∨ (𝑝 ∧ 𝑟)
and its validity can be demonstrated by showing that each side has the same truth value
whatever the truth values of 𝑝, 𝑞 and 𝑟 may be. This is done in the following truth table

𝑝 𝑞 𝑟 𝑞∨𝑟 𝑝 ∧ (𝑞 ∨ 𝑟) 𝑝∧𝑞 𝑝∧𝑟 (𝑝 ∧ 𝑞) ∨ (𝑝 ∧ 𝑟)

𝑇 𝑇 𝑇 𝑇 𝑇 𝑇 𝑇 𝑇

𝑇 𝑇 𝐹 𝑇 𝑇 𝑇 𝐹 𝑇

𝑇 𝐹 𝑇 𝑇 𝑇 𝐹 𝑇 𝑇

𝑇 𝐹 𝐹 𝐹 𝐹 𝐹 𝐹 𝐹

𝐹 𝑇 𝑇 𝑇 𝐹 𝐹 𝐹 𝐹

𝐹 𝑇 𝐹 𝑇 𝐹 𝐹 𝐹 𝐹

𝐹 𝐹 𝑇 𝑇 𝐹 𝐹 𝐹 𝐹

𝐹 𝐹 𝐹 𝐹 𝐹 𝐹 𝐹 𝐹
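The same eight-row verification is easily mechanized; the following sketch (our own) confirms the distributive law and its dual under all assignments:

```python
from itertools import product

lhs = lambda p, q, r: p and (q or r)
rhs = lambda p, q, r: (p and q) or (p and r)

rows = list(product([True, False], repeat=3))

# p ∧ (q ∨ r) ≡ (p ∧ q) ∨ (p ∧ r) under all eight assignments:
assert all(lhs(p, q, r) == rhs(p, q, r) for p, q, r in rows)

# The dual law p ∨ (q ∧ r) ≡ (p ∨ q) ∧ (p ∨ r) holds as well:
assert all((p or (q and r)) == ((p or q) and (p or r)) for p, q, r in rows)
print("both distributive laws verified on all", len(rows), "rows")
```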

In clarification of the meanings of 𝑝 ⟷ 𝑞 and 𝑝 ≡ 𝑞 we point out that there are different
categories of propositions. We can consider elementary propositions as being in the lowest
category. The statement 𝑝 ⟷ 𝑞 is in higher category since it is a proposition about propositions
namely that 𝑝, 𝑞 have the same truth value. 𝑝 ≡ 𝑞 is in a higher category still, since it is a
proposition about a proposition about propositions namely that 𝑝 ⟷ 𝑞 is formally true.

Remarks 4.4.8: 1. The equivalence relation mentioned earlier separates all propositions into
equivalence classes and we may denote by 𝑝̅ the class to which the proposition 𝑝 belongs. That
is, the propositions 𝑝 and 𝑞 belong to the same class if and only if 𝑝 ≡ 𝑞. Expressing this in
another way, 𝑝̅ = 𝑞̅ if and only if 𝑝 ≡ 𝑞.
2. The results (iv), (ix) and (x) show that the operations ´, ∧and ∨ are stable in relation to the
equivalence relation and that the equivalence relation is indeed a congruence relation which
enables us to define unambiguously the following operations on the set of equivalence classes:
𝑝̅ = 𝑞̅´ if and only if 𝑝 ≡ 𝑞´,
𝑝̅ = 𝑞̅ ∩ 𝑟̅ if and only if 𝑝 ≡ 𝑞 ∧ 𝑟,
𝑝̅ = 𝑞̅ ∪ 𝑟̅ if and only if 𝑝 ≡ 𝑞 ∨ 𝑟.
3. If, for example, (ix) were not true, then for some 𝑝₁, 𝑞₁, 𝑝₂, 𝑞₂ we would have 𝑝₁ ≡ 𝑞₁,
𝑝₂ ≡ 𝑞₂, 𝑝₁ ∧ 𝑝₂ ≢ 𝑞₁ ∧ 𝑞₂, with the consequence that 𝑝̅₁ = 𝑞̅₁, 𝑝̅₂ = 𝑞̅₂,
(𝑝₁ ∧ 𝑝₂)‾ ≠ (𝑞₁ ∧ 𝑞₂)‾, and we could not define 𝑝̅₁ ∩ 𝑝̅₂ = (𝑝₁ ∧ 𝑝₂)‾ without involving
ambiguity.
4. We observe that the statement 𝑝 ∨ 𝑝´ ⟷ 𝑞 ∨ 𝑞´ is always 𝑇 since each side has the same
truth value 𝑇 for all 𝑝, 𝑞. Thus 𝑝 ∨ 𝑝´ ≡ 𝑞 ∨ 𝑞´ and we may denote the class to which 𝑝 ∨ 𝑝´
belongs by 𝐼. Thus

𝐼 = (𝑝 ∨ 𝑝´)‾ = 𝑝̅ ∪ (𝑝´)‾ = 𝑝̅ ∪ 𝑝̅´.
In the same way 𝑝 ∧ 𝑝´ ≡ 𝑞 ∧ 𝑞´, since each side is always 𝐹, and we can write
𝑂 = (𝑝 ∧ 𝑝´)‾ = 𝑝̅ ∩ (𝑝´)‾ = 𝑝̅ ∩ 𝑝̅´.
In fact 𝐼 is the class of all tautologies and 𝑂 is the class of all contradictions.

Theorem 4.4.10: The classes 𝑝̅, 𝑞̅ , … , 𝑂, 𝐼 of propositions form a Boolean algebra in relation
to the operations ∩, ∪ and ´ defined above.

Proof: We need not give a detailed proof for each postulate. We have already seen in (vii) that
𝑝 ∧ 𝑞 ≡ 𝑞 ∧ 𝑝 so we obtain commutative laws from

𝑝̅ ∩ 𝑞̅ = (𝑝 ∧ 𝑞)‾ = (𝑞 ∧ 𝑝)‾ = 𝑞̅ ∩ 𝑝̅.
The other postulates for a distributive lattice are proved in a similar manner. For instance, we
show by constructing a truth table that 𝑝 ≡ 𝑝 ∧ (𝑝 ∨ 𝑞), from which we get
𝑝̅ = (𝑝 ∧ (𝑝 ∨ 𝑞))‾ = 𝑝̅ ∩ (𝑝 ∨ 𝑞)‾ = 𝑝̅ ∩ (𝑝̅ ∪ 𝑞̅)
which is absorption law. Further,
𝑝̅ ∪ 𝐼 = 𝑝̅ ∪ (𝑝̅ ∪ 𝑝̅´) = (𝑝̅ ∪ 𝑝̅) ∪ 𝑝̅´ = 𝑝̅ ∪ 𝑝̅´ = 𝐼,
and dually, 𝑝̅ ∩ 𝑂 = 𝑂. These formulae imply that 𝐼 is the top element and that 𝑂 is the bottom
element. The relations 𝑝̅ ∪ 𝑝̅ ´ = 𝐼, 𝑝̅ ∩ 𝑝̅ ´ = 𝑂 now demonstrate that the lattice is
complemented, since each class 𝑝̅ has a complement 𝑝̅´. Thus the lattice is a Boolean algebra.
Hence the result follows.

Switching Circuits
By a switching circuit we mean a piece of electrical apparatus between the terminals of which
may be one or more switches of different sorts. These switches may be hand operated, or may
be operated by the circuit itself, or by other circuits. Since we are only concerned with whether
or not a current flows in a circuit when a potential difference is applied between two of the
terminals, we take into account neither the magnitude of the current nor the magnitudes of the
component resistances. At any instant a given switch 𝑎 is supposed to be either open (𝑎 = 0)
or closed (𝑎 = 1). By means of electrical relays it is possible to arrange that a number of other
switches are open when 𝑎 is open and are closed when 𝑎 is closed. We shall denote each of
these by 𝑎, so that 𝑎 really denotes a class of switches which are either simultaneously open or
simultaneously closed. Again, another set of switches 𝑎′ can be operated by relays so that each
switch 𝑎′ is open when 𝑎 is closed and is closed when 𝑎 is open. In the accompanying
diagrams the lines indicate conductors while the lettered gaps in the conductors denote
switches. The boxes containing letters denote relays which may be used to operate other
switches. Fig. 1 denotes a circuit containing a single switch 𝑎 and a relay. A current flows in
this circuit only when 𝑎 = 1 and when it does so, it operates the relay which may be used to
operate other switches 𝑎 and also

to operate switches 𝑎′. The circuit of fig. 2 has two switches 𝑎 and 𝑏 in series and will be
denoted by 𝑎 ∩ 𝑏.

Since this circuit is closed if and only if 𝑎 and 𝑏 are both closed,

𝑎 ∩ 𝑏 = 1 ⟺ 𝑎 = 1 ∧ 𝑏 = 1,
𝑎 ∩ 𝑏 = 0 ⟺ 𝑎 = 0 ∨ 𝑏 = 0.                 (1)
When the relay of this circuit operates a switch 𝑐 such that 𝑐 = 1 when 𝑎 ∩ 𝑏 = 1 and 𝑐 = 0
when 𝑎 ∩ 𝑏 = 0, it is natural to write 𝑐 = 𝑎 ∩ 𝑏. In effect, this means that not only can a single
letter denote a class of switches but a single letter or single formula can denote a class of
equivalent circuits all of which are open simultaneously or all closed simultaneously.

In a similar manner a circuit containing two switches 𝑎 and 𝑏 in parallel will be denoted by
𝑎 ∪ 𝑏 (fig. 3) and it is easy to see

𝑎 ∪ 𝑏 = 1 ⟺ 𝑎 = 1 ∨ 𝑏 = 1,
𝑎 ∪ 𝑏 = 0 ⟺ 𝑎 = 0 ∧ 𝑏 = 0.                 (2)

As our notation suggests, the component circuits, or rather the classes of equivalent circuits, are
the elements of a Boolean algebra. The verification of the postulates of a distributive lattice is
accomplished in exactly the same manner as was done in the propositional calculus by means
of truth tables. For instance, the two circuits in fig. 4 and fig. 5 are either simultaneously open
or
simultaneously closed, as is demonstrated by the accompanying truth table.

𝑎 𝑏 𝑐 𝑎∪𝑏 (𝑎 ∪ 𝑏) ∪ 𝑐 𝑏∪𝑐 𝑎 ∪ (𝑏 ∪ 𝑐)

0 0 0 0 0 0 0

0 0 1 0 1 1 1

0 1 0 1 1 1 1

0 1 1 1 1 1 1

1 0 0 1 1 0 1

1 0 1 1 1 1 1

1 1 0 1 1 1 1

1 1 1 1 1 1 1

Thus the circuit (𝑎 ∪ 𝑏) ∪ 𝑐 is equivalent to the circuit 𝑎 ∪ (𝑏 ∪ 𝑐). We adopted the convention
that 𝑎 = 1 denotes that the switch or circuit 𝑎 is closed, but we may in fact denote the short
circuit itself by 1 (fig. 6). Thus we may interpret 𝑎 = 1 to mean that 𝑎 is temporarily equivalent
to a short circuit. Similarly, we interpret 𝑎 = 0 to mean that 𝑎 is temporarily equivalent to an
open circuit (fig. 6), which is labelled 0.
Since it is easily verified that 𝑎 ∪ 1 = 1, 𝑎 ∩ 0 = 0, it is clear that 1 is the top element and 0 is
the bottom element. An examination of the circuits of fig. 7 reveals that 𝑎 ∪ 𝑎′ = 1, 𝑎 ∩ 𝑎′ =
0; from which it is clear that each class 𝑎 has complement 𝑎′ . So the lattice is a Boolean algebra.
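This switching interpretation can be simulated directly: a switch class is a value 0 or 1, series connection is ∩, parallel connection is ∪, and a relay-operated 𝑎′ is 1 − 𝑎. The following sketch (our own model of the circuits of figs. 2–7) confirms the bound and complement laws quoted above:

```python
def series(a, b):       # a ∩ b: closed iff both switches are closed
    return a & b

def parallel(a, b):     # a ∪ b: closed iff at least one switch is closed
    return a | b

def transfer(a):        # a′: relay-operated, always opposite to a
    return 1 - a

for a in (0, 1):
    assert parallel(a, 1) == 1             # a ∪ 1 = 1 (parallel with a short circuit)
    assert series(a, 0) == 0               # a ∩ 0 = 0 (series with an open circuit)
    assert parallel(a, transfer(a)) == 1   # a ∪ a′ = 1
    assert series(a, transfer(a)) == 0     # a ∩ a′ = 0
print("switching laws verified")
```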

In the circuit of figs. 2 and 3 the switches 𝑎 and 𝑏 might be operated manually since they may
be operated independently. The circuit in fig.7 denoted by 𝑎 ∪ 𝑎′ = 1 reveals a different
situation for the operation of 𝑎′ is determined by that of 𝑎 and the one cannot be manipulated
independently of the other. This relationship is expressed by the formula 𝑎 ∪ 𝑎′ = 1.

We now mention some other circuits in which the relationship between the switches is
expressed by an equation in the Boolean algebra,

Example 4.4.11: Consider a circuit with two switches 𝑎, 𝑏 related by the equation

𝑎∪𝑏 =𝑎

or by one of the equivalent formulae 𝑎 ≥ 𝑏, 𝑎 ∩ 𝑏 = 𝑏. Since 1 ≥ 𝑎 ≥ 𝑏, it follows that 𝑏 = 1


implies 𝑎 = 1. Thus 𝑏 cannot be closed until 𝑎 is closed and whenever 𝑎 is open 𝑏 must be
open. We need not concern ourselves here with the mechanical construction of such a circuit,
which can be achieved in various ways, but it is plain that such a device would have practical
value. Indeed the idea can be extended to a sequential system of circuits 𝑎, 𝑏, 𝑐, … such that
𝑎 ≥ 𝑏 ≥ 𝑐 ≥ ⋯, of which the last can be closed only when 𝑎, 𝑏, 𝑐, … have been closed in
alphabetical order.

Example 4.4.12: Another circuit of special interest contains three switches 𝑎, 𝑏, 𝑐 satisfying

𝑏 = 𝑎 ∩ (𝑏 ∪ 𝑐)

Since this relation implies 𝑎 ≥ 𝑏, this circuit is a modification of the previous one. Assume
initially that 𝑎 = 1; then the closing of 𝑐 ensures that 𝑐 = 1, 𝑏 ∪ 𝑐 = 1, 𝑏 = 𝑎 ∩ (𝑏 ∪ 𝑐) = 1, and 𝑏
closes. However, 𝑏 must open immediately when 𝑎 is opened. This is known as a lock-in circuit.
We can suppose that 𝑎 is a break switch which is normally held closed by a spring and that 𝑐
is a make switch normally held open by a spring. The switch 𝑏 is operated by a relay. To close
𝑏 we need only press 𝑐 momentarily; once this is done 𝑏 stays closed until 𝑎 is
pressed. This circuit is illustrated in fig. 8.

The two principal objects in applying Boolean algebra to switching problems are, first, to design
a circuit with a prescribed function and, secondly, to simplify a circuit without altering its
function.
As an illustration of the first type of problem we consider the construction of a binary adder
which yields the sum of three digits 𝑎, 𝑏, 𝑐 in the binary scale. One of these digits, say 𝑐, will
be a “carry in” from a previous column. Since 𝑎, 𝑏, 𝑐 are each 0 or 1 their sum must be one of
the four integers 0, 1, 2, 3, which in the binary scale take the forms 00, 01, 10, 11. If we
denote this sum in the binary scale by 𝑥𝑦, then 𝑥 is the “carry out” digit which is inserted
in the next column. The summation to be performed and the required values of 𝑥, 𝑦 are as
follows:

𝑎 𝑏 𝑐     𝑎 + 𝑏 + 𝑐     𝑥 𝑦

0 0 0         0          0 0
0 0 1         1          0 1
0 1 0         1          0 1
0 1 1         2          1 0
1 0 0         1          0 1
1 0 1         2          1 0
1 1 0         2          1 0
1 1 1         3          1 1

Thus employing the formulae (1) and (2) we get;

𝑦 = 1 if and only if (𝑎 = 0 ∧ 𝑏 = 0 ∧ 𝑐 = 1) ∨ (𝑎 = 0 ∧ 𝑏 = 1 ∧ 𝑐 = 0) ∨ (𝑎 = 1 ∧ 𝑏 =
0 ∧ 𝑐 = 0) ∨ (𝑎 = 1 ∧ 𝑏 = 1 ∧ 𝑐 = 1);

if and only if (𝑎′ = 1 ∧ 𝑏′ = 1 ∧ 𝑐 = 1) ∨ (𝑎′ = 1 ∧ 𝑏 = 1 ∧ 𝑐′ = 1) ∨ (𝑎 = 1 ∧
𝑏′ = 1 ∧ 𝑐′ = 1) ∨ (𝑎 = 1 ∧ 𝑏 = 1 ∧ 𝑐 = 1);

if and only if (𝑎′ ∩ 𝑏′ ∩ 𝑐 = 1) ∨ (𝑎′ ∩ 𝑏 ∩ 𝑐′ = 1) ∨ (𝑎 ∩ 𝑏′ ∩ 𝑐′ = 1) ∨ (𝑎 ∩ 𝑏 ∩ 𝑐 = 1);

if and only if (𝑎′ ∩ 𝑏′ ∩ 𝑐) ∪ (𝑎′ ∩ 𝑏 ∩ 𝑐′) ∪ (𝑎 ∩ 𝑏′ ∩ 𝑐′) ∪ (𝑎 ∩ 𝑏 ∩ 𝑐) = 1.

Consequently,

𝑦 = (𝑎′ ∩ 𝑏′ ∩ 𝑐) ∪ (𝑎′ ∩ 𝑏 ∩ 𝑐′) ∪ (𝑎 ∩ 𝑏′ ∩ 𝑐′) ∪ (𝑎 ∩ 𝑏 ∩ 𝑐).


Similarly 𝑥 = (𝑎′ ∩ 𝑏 ∩ 𝑐) ∪ (𝑎 ∩ 𝑏′ ∩ 𝑐) ∪ (𝑎 ∩ 𝑏 ∩ 𝑐′) ∪ (𝑎 ∩ 𝑏 ∩ 𝑐).

This simplifies to 𝑥 = (𝑎 ∩ 𝑏) ∪ (𝑏 ∩ 𝑐) ∪ (𝑐 ∩ 𝑎).
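Both expressions for the adder can be confirmed by comparing them with ordinary binary addition of the three digits. In this sketch of our own, ∩, ∪ and ′ are rendered as &, | and 1 − (·) on the digits 0 and 1; the term 𝑎 ∩ 𝑏′ ∩ 𝑐′ in 𝑦 corresponds to the input 100.

```python
from itertools import product

def n(a):               # complement a′
    return 1 - a

for a, b, c in product((0, 1), repeat=3):
    y = (n(a) & n(b) & c) | (n(a) & b & n(c)) | (a & n(b) & n(c)) | (a & b & c)
    x = (n(a) & b & c) | (a & n(b) & c) | (a & b & n(c)) | (a & b & c)
    x_simple = (a & b) | (b & c) | (c & a)   # the simplified (majority) form
    carry, digit = divmod(a + b + c, 2)      # binary sum of the three digits
    assert (x, y) == (carry, digit)
    assert x_simple == carry
print("x and y agree with binary addition for all 8 inputs")
```

The simplified form of 𝑥 is the familiar majority function: the carry is 1 exactly when at least two of the three digits are 1.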

The following network may therefore be used (fig. 9)

The problem of simplifying a circuit is largely one of reducing a given Boolean polynomial to
an equivalent expression which is simpler in form, in the sense that fewer letters are required in
writing it down. Thus the first of the two expressions for 𝑥 above requires 12 letters, or switches,
while the second requires only 6. A further alternative, employing only 5 switches, would be
given by the formula

𝑥 = [𝑎 ∩ (𝑏 ∪ 𝑐)] ∪ (𝑏 ∩ 𝑐).
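That this 5-switch expression agrees with the 6-switch majority form of 𝑥 can itself be checked over all eight input combinations (a quick sketch of our own):

```python
from itertools import product

for a, b, c in product((0, 1), repeat=3):
    six = (a & b) | (b & c) | (c & a)        # 6-switch form of x
    five = (a & (b | c)) | (b & c)           # 5-switch form of x
    assert five == six
print("the 5-switch and 6-switch forms of x agree")
```

The equivalence is a distributive-law factoring: (𝑎 ∩ 𝑏) ∪ (𝑐 ∩ 𝑎) = 𝑎 ∩ (𝑏 ∪ 𝑐).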

There are of course also certain technical considerations which must be taken into account in
determining which of two circuits should be regarded as the simpler. We consider some aspects
of the problem in next section.

Bridge circuits

The circuits discussed in the previous section have all been of the series-parallel type. If
bilateral elements are used, which conduct current in both directions, it may be possible to
simplify a given circuit by using a bridge circuit such as that of fig. 10. In this circuit we
suppose that when 𝑐 is closed current can flow in either direction through this switch. The
bridge circuit illustrated employs only five switches, though a series-parallel circuit for 𝑥 would
require at least eight, corresponding to
𝑥 = [𝑎 ∩ (𝑒 ∪ (𝑐 ∩ 𝑑))] ∪ [𝑏 ∩ (𝑑 ∪ (𝑐 ∩ 𝑒))]

or ten corresponding to the 𝑥 in the figure. The appropriate formula for such a bridge circuit
can be obtained by enumerating the possible paths of the current and by taking the union of the
Boolean functions for the different paths.
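For fig. 10 this path enumeration can be carried out explicitly. Assuming the usual bridge labelling (arms 𝑎, 𝑏 on one side, 𝑑, 𝑒 on the other, and 𝑐 across the bridge; the figure itself is not reproduced here, so this labelling is our own reading of the formula), the four possible paths are 𝑎–𝑒, 𝑏–𝑑, 𝑎–𝑐–𝑑 and 𝑏–𝑐–𝑒, and their union agrees with the series-parallel formula above:

```python
from itertools import product

for a, b, c, d, e in product((0, 1), repeat=5):
    # union of the Boolean functions for the four current paths
    paths = (a & e) | (b & d) | (a & c & d) | (b & c & e)
    # the eight-switch series-parallel expression for x
    series_parallel = (a & (e | (c & d))) | (b & (d | (c & e)))
    assert paths == series_parallel
print("bridge path enumeration agrees with the series-parallel formula")
```

The two forms are related simply by distributing 𝑎 and 𝑏 over the inner unions.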

Definition 4.4.13: A device for the construction of non-series-parallel circuits is the disjunctive
tree which employs transfer switches in which the operation of 𝑎 and 𝑎′ is effected by a single
spring.

Consider the case of three variables (three classes of switches) 𝑎, 𝑏, 𝑐 and suppose that
𝑓(𝑎, 𝑏, 𝑐) is a Boolean function for the class of circuits. We find that 𝑓(𝑎, 𝑏, 𝑐) can be written
as

[𝑎 ∩ 𝑏 ∩ 𝑓(1,1, 𝑐)] ∪ [𝑎 ∩ 𝑏′ ∩ 𝑓(1,0, 𝑐)] ∪ [𝑎′ ∩ 𝑏 ∩ 𝑓(0,1, 𝑐)] ∪ [𝑎′ ∩ 𝑏′ ∩ 𝑓(0,0, 𝑐)].

Now 𝑓(1,1, 𝑐), 𝑓(1,0, 𝑐), 𝑓(0,1, 𝑐), 𝑓(0,0, 𝑐) all belong to the four element lattice generated by
𝑐 which is composed of the elements 0, 𝑐, 𝑐 ′ , 1. Therefore the circuit required is realised by
marrying the disjunctive tree for 𝑎 and 𝑏 (fig.11 a) with the network of fig. 11b.

There is of course no need to include the open circuit 0 in fig.11b except for diagrammatic
purposes. Whatever the nature of 𝑓(𝑎, 𝑏, 𝑐) at most eight switches or four transfer switches are
required. By way of illustration we take,

𝑓(𝑎, 𝑏, 𝑐) = (𝑎′ ∩ 𝑏 ′ ∩ 𝑐) ∪ (𝑎′ ∩ 𝑏 ∩ 𝑐 ′ ) ∪ (𝑎 ∩ 𝑏 ′ ∩ 𝑐 ′ ) ∪ (𝑎 ∩ 𝑏 ∩ 𝑐)

which is the formula for the digit 𝑦 of the binary adder investigated in the previous section. We
can easily verify that

𝑓(1,1, 𝑐) = 𝑐, 𝑓(1,0, 𝑐) = 𝑐 ′ , 𝑓(0,1, 𝑐) = 𝑐 ′ , 𝑓(0,0, 𝑐) = 𝑐.
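Both these residual functions and the disjunctive-tree decomposition itself can be verified mechanically (our own sketch of the expansion of 𝑓 about 𝑎 and 𝑏):

```python
from itertools import product

def f(a, b, c):
    """The sum digit y of the binary adder from the previous section."""
    n = lambda t: 1 - t
    return (n(a) & n(b) & c) | (n(a) & b & n(c)) | (a & n(b) & n(c)) | (a & b & c)

n = lambda t: 1 - t

# The residual functions in the four-element lattice {0, c, c′, 1}:
for c in (0, 1):
    assert f(1, 1, c) == c and f(0, 0, c) == c
    assert f(1, 0, c) == n(c) and f(0, 1, c) == n(c)

# The disjunctive tree reproduces f:
# f(a,b,c) = (a∩b∩f(1,1,c)) ∪ (a∩b′∩f(1,0,c)) ∪ (a′∩b∩f(0,1,c)) ∪ (a′∩b′∩f(0,0,c))
for a, b, c in product((0, 1), repeat=3):
    tree = (a & b & f(1, 1, c)) | (a & n(b) & f(1, 0, c)) \
         | (n(a) & b & f(0, 1, c)) | (n(a) & n(b) & f(0, 0, c))
    assert tree == f(a, b, c)
print("disjunctive-tree expansion reproduces f")
```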

To obtain the required circuit (fig. 12) it is only necessary to connect the circuits 𝑎 ∩ 𝑏 and 𝑎′ ∩
𝑏′ with 𝑐 and to connect 𝑎 ∩ 𝑏′ and 𝑎′ ∩ 𝑏 with 𝑐′. The short circuit 1 in fig. 11b is not required.

The circuit of fig. 12 is clearly more economical than the series-parallel circuit of fig. 9 for the
same Boolean function. The method described may be applied to any number of variables, but
as the number rises the complexity of the computation rapidly increases.
