Lattices and Boolean Algebras
Prepared by
Dr Aftab Hussain Shah
Assistant Professor
Department of Mathematics
Central University of Kashmir
MARCH 1, 2018
Srinagar
The present chapter aims to provide introductory definitions, examples and results on ordered sets which shall be used in the subsequent chapters. This chapter consists of five sections. Section 1.1 is on ordered sets and their examples. In Section 1.2 we discuss Hasse diagrams, a special type of diagram used to represent ordered sets. Sections 1.3 and 1.4 deal with results and examples on order-preserving and residuated mappings between ordered sets. The chapter ends with some important results and examples on isomorphisms of ordered sets.
In this section we go through the definition of a partial order relation on a set, called an "order" for short, and discuss partially ordered sets (ordered sets) in detail with the help of various examples. The section ends with some important results on ordered sets.
Definition 1.1.1: A binary relation on a non-empty set 𝐸 is a subset 𝑅 of the Cartesian product set 𝐸 × 𝐸 = {(𝑥, 𝑦) | 𝑥, 𝑦 ∈ 𝐸}. For any (𝑥, 𝑦) ∈ 𝐸 × 𝐸, (𝑥, 𝑦) ∈ 𝑅 means that 𝑥 is related to 𝑦 under 𝑅, and we denote this by 𝑥𝑅𝑦.
Definition 1.1.4: By an ordered set we shall mean a set 𝐸 on which there is defined an order ≤ (that is, a reflexive, antisymmetric and transitive relation), and we denote it by (𝐸; ≤).
Other common terminology for an order is a partial order, and for an ordered set a partially ordered set or a poset.
Example 1.1.6: On the set ℙ(𝐸) of all subsets of a non-empty set 𝐸, the relation ⊆ of set
inclusion defined by 𝐴 ≤ 𝐵 if and only if 𝐴 ⊆ B is an order.
Example 1.1.7: On the set ℕ of natural numbers the relation | of divisibility, defined by 𝑚 ≤
𝑛 if and only if 𝑚|𝑛, is an order.
Solution: To show that ℕ with respect to | forms an ordered set we must show that | satisfies reflexivity, antisymmetry and transitivity.
Reflexivity: For any 𝑚 ∈ ℕ, we have 𝑚 = 1 ∙ 𝑚, so 𝑚|𝑚, thus 𝑚 ≤ 𝑚 and hence ≤ is reflexive.
Antisymmetry: For any 𝑚, 𝑛 ∈ ℕ suppose 𝑚 ≤ 𝑛 and 𝑛 ≤ 𝑚, that is, 𝑚|𝑛 and 𝑛|𝑚. Then 𝑛 = 𝑘𝑚 and 𝑚 = 𝑙𝑛 for some 𝑘, 𝑙 ∈ ℕ, so 𝑚 = 𝑙𝑘𝑚, which forces 𝑘 = 𝑙 = 1 and hence 𝑚 = 𝑛. Thus ≤ is antisymmetric.
Transitivity: For 𝑚, 𝑛, 𝑝 ∈ ℕ suppose 𝑚 ≤ 𝑛 and 𝑛 ≤ 𝑝, that is, 𝑚|𝑛 and 𝑛|𝑝. Then 𝑚|𝑝, i.e. 𝑚 ≤ 𝑝. Thus ≤ is transitive, and so | is an order on ℕ.
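The three properties just verified can also be checked mechanically on a finite sample; here is a minimal Python sketch (the helper name `divides` is ours, not from the text):

```python
# Check that divisibility is reflexive, antisymmetric and transitive
# on a finite sample of natural numbers.
def divides(m, n):
    return n % m == 0

sample = range(1, 31)

# Reflexivity: m | m for every m.
assert all(divides(m, m) for m in sample)

# Antisymmetry: m | n and n | m imply m = n.
assert all(m == n for m in sample for n in sample
           if divides(m, n) and divides(n, m))

# Transitivity: m | n and n | p imply m | p.
assert all(divides(m, p) for m in sample for n in sample for p in sample
           if divides(m, n) and divides(n, p))
```

Of course, a finite check is only an illustration; the proof above covers all of ℕ.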
Example 1.1.8: If (𝑃; ≤) is an ordered set and 𝑄 is a subset of 𝑃, then the relation ≤𝑄 defined on 𝑄 by
𝑥 ≤𝑄 𝑦 if and only if 𝑥 ≤ 𝑦
is an order on 𝑄.
Example 1.1.10: If (𝐸1; ≤1), (𝐸2; ≤2), . . . , (𝐸𝑛; ≤𝑛) are ordered sets then the Cartesian product set 𝐸1 × 𝐸2 × ⋯ × 𝐸𝑛 can be given the Cartesian order ≤ defined by: for any (𝑥1, 𝑥2, . . . , 𝑥𝑛), (𝑦1, 𝑦2, . . . , 𝑦𝑛) in the product,
(𝑥1, 𝑥2, . . . , 𝑥𝑛) ≤ (𝑦1, 𝑦2, . . . , 𝑦𝑛) if and only if 𝑥𝑖 ≤𝑖 𝑦𝑖 for all 𝑖 = 1, 2, … , 𝑛.
Solution: Reflexivity: Let (𝑥1, 𝑥2, . . . , 𝑥𝑛) be in the product, where 𝑥𝑖 ∈ 𝐸𝑖 for all 𝑖 = 1, 2, … , 𝑛. By reflexivity of each (𝐸𝑖; ≤𝑖) we have 𝑥𝑖 ≤𝑖 𝑥𝑖 for all 𝑖, so (𝑥1, 𝑥2, . . . , 𝑥𝑛) ≤ (𝑥1, 𝑥2, . . . , 𝑥𝑛) (by definition of ≤). This shows that ≤ is reflexive.
Antisymmetry: Let (𝑥1, . . . , 𝑥𝑛) and (𝑦1, . . . , 𝑦𝑛) be in the product and suppose (𝑥1, . . . , 𝑥𝑛) ≤ (𝑦1, . . . , 𝑦𝑛) and (𝑦1, . . . , 𝑦𝑛) ≤ (𝑥1, . . . , 𝑥𝑛). Then 𝑥𝑖 ≤𝑖 𝑦𝑖 and 𝑦𝑖 ≤𝑖 𝑥𝑖 for all 𝑖, so by antisymmetry of each (𝐸𝑖; ≤𝑖) we get 𝑥𝑖 = 𝑦𝑖 for all 𝑖, that is, (𝑥1, . . . , 𝑥𝑛) = (𝑦1, . . . , 𝑦𝑛). This shows that ≤ is antisymmetric.
Transitivity: Let (𝑥1, . . . , 𝑥𝑛), (𝑦1, . . . , 𝑦𝑛) and (𝑧1, . . . , 𝑧𝑛) be in the product and suppose (𝑥1, . . . , 𝑥𝑛) ≤ (𝑦1, . . . , 𝑦𝑛) and (𝑦1, . . . , 𝑦𝑛) ≤ (𝑧1, . . . , 𝑧𝑛). Then 𝑥𝑖 ≤𝑖 𝑦𝑖 and 𝑦𝑖 ≤𝑖 𝑧𝑖 for all 𝑖, so by transitivity of each (𝐸𝑖; ≤𝑖) we get 𝑥𝑖 ≤𝑖 𝑧𝑖 for all 𝑖, that is, (𝑥1, . . . , 𝑥𝑛) ≤ (𝑧1, . . . , 𝑧𝑛). This proves that ≤ is transitive.
Definition 1.1.11: The order defined above is called the Cartesian order.
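The Cartesian order for 𝑛 = 2 can be illustrated with a short Python sketch (all names are ours); note that it is a genuinely partial order, since distinct components can disagree:

```python
# The Cartesian (componentwise) order on a product of two chains.
def cart_le(x, y):
    return all(xi <= yi for xi, yi in zip(x, y))

E = [(a, b) for a in range(3) for b in range(3)]

# Reflexive, antisymmetric, transitive:
assert all(cart_le(x, x) for x in E)
assert all(x == y for x in E for y in E
           if cart_le(x, y) and cart_le(y, x))
assert all(cart_le(x, z) for x in E for y in E for z in E
           if cart_le(x, y) and cart_le(y, z))

# But it is only a partial order: (0, 1) and (1, 0) are incomparable.
assert not cart_le((0, 1), (1, 0)) and not cart_le((1, 0), (0, 1))
```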
Example 1.1.12: Let 𝐸 and 𝐹 be ordered sets. Then the set 𝑀𝑎𝑝(𝐸, 𝐹) of all mappings 𝑓: 𝐸 → 𝐹 can be ordered by defining 𝑓 ≤ 𝑔 if and only if 𝑓(𝑥) ≤ 𝑔(𝑥) for all 𝑥 ∈ 𝐸.
Definition 1.1.13: We say that elements 𝑥, 𝑦 of an ordered set (𝐸; ≤) are comparable if either 𝑥 ≤ 𝑦 or 𝑦 ≤ 𝑥; otherwise 𝑥 and 𝑦 are incomparable, which we denote by writing 𝑥 ∥ 𝑦.
Definition 1.1.14: If all elements of an ordered set ( 𝐸, ≤) are comparable then we say that 𝐸
forms a chain or that ≤ is a total order.
Example 1.1.16: The sets ℕ, ℤ, ℚ, ℝ of natural numbers, integers, rationals and real numbers form chains under their usual order ≤.
Example 1.1.17: In Example 1.1.6, the singleton subsets of ℙ(𝐸) form an antichain under the inherited inclusion order.
Example 1.1.18: Let (𝑃1; ≤1) and (𝑃2; ≤2) be ordered sets. We prove that the relation ≤ defined on 𝑃1 × 𝑃2 by
(𝑥1, 𝑦1) ≤ (𝑥2, 𝑦2) if and only if 𝑥1 <1 𝑥2, or 𝑥1 = 𝑥2 and 𝑦1 ≤2 𝑦2,
is an order, called the lexicographic order on 𝑃1 × 𝑃2. We also show that ≤ is a total order if and only if ≤1 and ≤2 are total orders.
Solution: Reflexivity: Suppose that (𝑥, 𝑦) ∈ 𝑃1 × 𝑃2; then 𝑥 ∈ 𝑃1 and 𝑦 ∈ 𝑃2. Since 𝑃1 and 𝑃2 are ordered sets we have 𝑥 ≤1 𝑥 and 𝑦 ≤2 𝑦; in particular 𝑥 = 𝑥 and 𝑦 ≤2 𝑦, so by definition (𝑥, 𝑦) ≤ (𝑥, 𝑦). Thus ≤ is reflexive.
Antisymmetry: Let (𝑥1, 𝑦1) ≤ (𝑥2, 𝑦2) and (𝑥2, 𝑦2) ≤ (𝑥1, 𝑦1). We show that (𝑥1, 𝑦1) = (𝑥2, 𝑦2). By definition, either 𝑥1 <1 𝑥2, or 𝑥1 = 𝑥2 and 𝑦1 ≤2 𝑦2; and likewise either 𝑥2 <1 𝑥1, or 𝑥2 = 𝑥1 and 𝑦2 ≤2 𝑦1. If 𝑥1 <1 𝑥2 then both 𝑥2 <1 𝑥1 and 𝑥2 = 𝑥1 are impossible, so we must have 𝑥1 = 𝑥2. But then 𝑦1 ≤2 𝑦2 and 𝑦2 ≤2 𝑦1, whence 𝑦1 = 𝑦2 by antisymmetry of ≤2. Thus (𝑥1, 𝑦1) = (𝑥2, 𝑦2) and ≤ is antisymmetric; transitivity follows by a similar case analysis.
The following verifies that the relation ≤ on the disjoint union 𝑃1 ∪ 𝑃2 introduced in Example 1.1.20 below is antisymmetric and transitive.
Antisymmetry: Suppose 𝑥 ≤ 𝑦 and 𝑦 ≤ 𝑥, that is,
(𝑥, 𝑦 ∈ 𝑃1 and 𝑥 ≤1 𝑦) or (𝑥, 𝑦 ∈ 𝑃2 and 𝑥 ≤2 𝑦) or (𝑥 ∈ 𝑃1 and 𝑦 ∈ 𝑃2),
and
(𝑦, 𝑥 ∈ 𝑃1 and 𝑦 ≤1 𝑥) or (𝑦, 𝑥 ∈ 𝑃2 and 𝑦 ≤2 𝑥) or (𝑦 ∈ 𝑃1 and 𝑥 ∈ 𝑃2).
The following cases can arise:
Case 1: 𝑥, 𝑦 ∈ 𝑃1 with 𝑥 ≤1 𝑦 and 𝑦 ≤1 𝑥. Since (𝑃1; ≤1) is an ordered set, antisymmetry of ≤1 gives 𝑥 = 𝑦.
Case 2: 𝑥, 𝑦 ∈ 𝑃2 with 𝑥 ≤2 𝑦 and 𝑦 ≤2 𝑥. Antisymmetry of ≤2 gives 𝑥 = 𝑦.
Case 3: Any mixed case, in which one of 𝑥, 𝑦 lies in 𝑃1 and the other in 𝑃2, is impossible: since 𝑃1 and 𝑃2 are disjoint, 𝑥 ≤ 𝑦 and 𝑦 ≤ 𝑥 would force 𝑥 ∈ 𝑃1, 𝑦 ∈ 𝑃2 and 𝑦 ∈ 𝑃1, 𝑥 ∈ 𝑃2 simultaneously.
Hence ≤ is antisymmetric.
Transitivity: Let 𝑥, 𝑦, 𝑧 ∈ 𝑃1 ∪ 𝑃2 and suppose 𝑥 ≤ 𝑦 and 𝑦 ≤ 𝑧. If 𝑥, 𝑦, 𝑧 all lie in the same 𝑃𝑖 then 𝑥 ≤𝑖 𝑦 and 𝑦 ≤𝑖 𝑧, so 𝑥 ≤𝑖 𝑧 by transitivity of ≤𝑖, and hence 𝑥 ≤ 𝑧. In every remaining case we must have 𝑥 ∈ 𝑃1 and 𝑧 ∈ 𝑃2, and then 𝑥 ≤ 𝑧 holds by the third clause of the definition. Thus ≤ is transitive.
Example 1.1.20: Let 𝑃1 and 𝑃2 be disjoint sets, let ≤1 be an order on 𝑃1 and let ≤2 be an order on 𝑃2. We show that the following defines an order on 𝑃1 ∪ 𝑃2:
𝑥 ≤ 𝑦 if and only if (𝑥, 𝑦 ∈ 𝑃1 and 𝑥 ≤1 𝑦), or (𝑥, 𝑦 ∈ 𝑃2 and 𝑥 ≤2 𝑦), or (𝑥 ∈ 𝑃1 and 𝑦 ∈ 𝑃2).
The resulting ordered set is called the vertical sum or the linear sum of 𝑃1 and 𝑃2 and is denoted by 𝑃1 ⨁ 𝑃2.
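As a quick illustration of Example 1.1.18, a small Python sketch (function names are ours) checks that the product of two chains, though not totally ordered componentwise, is totally ordered lexicographically:

```python
# Lexicographic order on the product of two integer chains.
def lex_le(p, q):
    (x1, y1), (x2, y2) = p, q
    return x1 < x2 or (x1 == x2 and y1 <= y2)

P = [(a, b) for a in range(3) for b in range(3)]

# Every pair of elements is comparable, i.e. the order is total:
assert all(lex_le(p, q) or lex_le(q, p) for p in P for q in P)
```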
The next result shows that the notion of order is carried over from any relation to its dual.
Theorem 1.1.21: If 𝑅 is an order on 𝐸 then so is its dual 𝑅𝑑 = {(𝑦, 𝑥) | (𝑥, 𝑦) ∈ 𝑅}.
Definition 1.1.22: If (𝐸; ≤) is an ordered set, then by the top element or maximum element or greatest element of 𝐸 we mean an element 𝑥 ∈ 𝐸 such that 𝑦 ≤ 𝑥 for every 𝑦 ∈ 𝐸.
Note: A top element, when it exists, is unique. In fact, if 𝑥 and 𝑦 are both top elements of 𝐸 then on the one hand 𝑦 ≤ 𝑥 and on the other hand 𝑥 ≤ 𝑦, whence by the antisymmetry of ≤ we have 𝑥 = 𝑦.
Note: A bottom element, when it exists, is unique. In fact, if 𝑥 and 𝑧 are both bottom elements of 𝐸, then on the one hand we have 𝑥 ≥ 𝑧 and on the other hand 𝑧 ≥ 𝑥, whence by the antisymmetry of ≥ we have 𝑥 = 𝑧.
Definition 1.1.24: An ordered set that has both a top element and bottom element is said to be
bounded.
Note: We shall use the notation 𝑥 < 𝑦 to mean 𝑥 ≤ 𝑦 and 𝑥 ≠ 𝑦. Note that the relation < thus defined is transitive but is not an order, since it fails to be reflexive. In other words, a strict order is characterized by the irreflexive and transitive laws.
Definition 1.1.26: In an ordered set (𝐸; ≤) we say that 𝑥 is covered by 𝑦 (or that 𝑦 covers 𝑥) if 𝑥 < 𝑦 and there is no 𝑎 ∈ 𝐸 such that 𝑥 < 𝑎 < 𝑦. We denote this by 𝑥 ≺ 𝑦. The set of pairs (𝑥, 𝑦) such that 𝑦 covers 𝑥 is called the covering relation of (𝐸; ≤).
Example 1.1.27: The covering relation of the partial ordering {(𝑎, 𝑏) : 𝑎 divides 𝑏} on {1, 2, 3, 4, 6, 12} consists of the pairs
(1,2), (1,3), (2,4), (2,6), (3,6), (4,12), (6,12).
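The covering relation of a finite order can be computed directly from the definition; here is a Python sketch for the divisibility order of Example 1.1.27 (helper names are ours):

```python
# Covering relation of divisibility on {1, 2, 3, 4, 6, 12}:
# y covers x when x < y and no a lies strictly between them.
E = [1, 2, 3, 4, 6, 12]

def lt(x, y):                      # strict divisibility: x < y
    return x != y and y % x == 0

covers = {(x, y) for x in E for y in E
          if lt(x, y) and not any(lt(x, a) and lt(a, y) for a in E)}

# 1 ≺ 2, 1 ≺ 3, 2 ≺ 4, 2 ≺ 6, 3 ≺ 6, 4 ≺ 12, 6 ≺ 12
assert covers == {(1, 2), (1, 3), (2, 4), (2, 6), (3, 6), (4, 12), (6, 12)}
```

These pairs are exactly the edges that will appear in the Hasse diagram of the next section.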
This section deals with an important tool for working with partially ordered sets: we shall see how an order on a set can be represented by a diagram. Such diagrams are called Hasse diagrams and are defined as below.
Definition 1.2.1: A diagram representing an ordered set is called a Hasse diagram if:
(1) elements are represented by points; and
(2) for any two elements 𝑥, 𝑦, if 𝑦 covers 𝑥 (that is, 𝑥 ≺ 𝑦), then the point representing 𝑦 is placed above the point representing 𝑥 and the two are joined by a line segment.
Example 1.2.2: Let 𝐸 = {1,2,3,4,6,12} be the set of positive integral divisors of 12. Then the
Hasse diagram of (𝐸; ≤) where ≤ is divisibility order on 𝐸 is as follows.
Example 1.2.3: Let 𝑋 = {𝑎, 𝑏, 𝑐} and 𝐸 = ℙ(𝑋). Then the Hasse diagram of (𝐸; ≤), where ≤ is the inclusion order on ℙ(𝑋), can be drawn as below:
The Hasse diagram of 𝐸 𝑑 = (𝐸; ≤) where ≤ is ⊇ is obtained by turning the above diagram
upside down and is as follows:
Example 1.2.4: We draw Hasse diagrams on sets of 3, 4 and 5 elements by taking different orders on them.
(ii) If we take the set of positive integral divisors of 4 and order it by divisibility, we obtain the Hasse diagram:
Set of 4 elements:
(i) {𝑎, 𝑏, 𝑐, 𝑑} under usual order;
Example 1.2.5: The Hasse diagram for the set of positive integral divisors of 210, ordered by divisibility, is given below.
Solution: The set of positive divisors of 210 is
𝑆 = {1, 2, 3, 5, 6, 7, 10, 14, 15, 21, 30, 35, 42, 70, 105, 210}.
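Since 210 = 2·3·5·7 is squarefree, its divisor lattice is a copy of the lattice of subsets of a 4-element set. A Python sketch (our own helper names) computes the divisors and the Hasse-diagram edges:

```python
# Divisors of 210 and the edges of the Hasse diagram under divisibility.
S = [d for d in range(1, 211) if 210 % d == 0]
assert len(S) == 16                # 210 = 2·3·5·7, so 2^4 divisors

def lt(x, y):
    return x != y and y % x == 0

edges = [(x, y) for x in S for y in S
         if lt(x, y) and not any(lt(x, a) and lt(a, y) for a in S)]

# Each divisor is covered by the divisors obtained by multiplying in one
# further prime, so the diagram is the 4-dimensional cube with 4·2^3 edges.
assert len(edges) == 32
```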
Now, 𝑃2 × 𝑃1 = {(𝑥, 𝑎), (𝑥, 𝑏), (𝑥, 𝑐), (𝑦, 𝑎), (𝑦, 𝑏), (𝑦, 𝑐)}, and its Hasse diagram under the Cartesian order is given by:
From the above we conclude that 𝑃1 × 𝑃2 and 𝑃2 × 𝑃1 have the same Hasse diagram, except for the order of the components of the vertices.
Definition 1.3.1: If (𝐴, ≤1) and (𝐵, ≤2) are ordered sets, then we say that a mapping 𝑓: 𝐴 → 𝐵 is isotone (or monotone, or order-preserving) if
for all 𝑥, 𝑦 ∈ 𝐴, 𝑥 ≤1 𝑦 implies 𝑓(𝑥) ≤2 𝑓(𝑦),
and is antitone (or order-reversing) if
for all 𝑥, 𝑦 ∈ 𝐴, 𝑥 ≤1 𝑦 implies 𝑓(𝑥) ≥2 𝑓(𝑦).
Example 1.3.2: If 𝐸 is a non-empty set and 𝐴 ⊆ 𝐸, then 𝑓𝐴: 𝑃(𝐸) → 𝑃(𝐸) given by 𝑓𝐴(𝑋) = 𝐴 ∩ 𝑋 is isotone, and if 𝑋´ is the complement of 𝑋 in 𝐸, then the assignment 𝑋 ↦ 𝑋´ defines an antitone mapping on 𝑃(𝐸).
Solution: Let 𝑋, 𝑌 ∈ 𝑃(𝐸) be such that 𝑋 ⊆ 𝑌; we show that 𝑓𝐴(𝑋) ⊆ 𝑓𝐴(𝑌), i.e., 𝐴 ∩ 𝑋 ⊆ 𝐴 ∩ 𝑌. Let 𝑥 ∈ 𝐴 ∩ 𝑋; then 𝑥 ∈ 𝐴 and 𝑥 ∈ 𝑋, therefore 𝑥 ∈ 𝐴 and 𝑥 ∈ 𝑌 (since 𝑋 ⊆ 𝑌), which implies 𝑥 ∈ 𝐴 ∩ 𝑌. Thus 𝑓𝐴(𝑋) ⊆ 𝑓𝐴(𝑌), showing that 𝑓𝐴 is isotone.
Now we show that 𝑓(𝑋) = 𝑋´ is antitone. Let 𝑋, 𝑌 ∈ 𝑃(𝐸) be such that 𝑋 ⊆ 𝑌; we have to show that 𝑌´ ⊆ 𝑋´. Let 𝑥 ∈ 𝑌´; then 𝑥 ∉ 𝑌, therefore 𝑥 ∉ 𝑋 (since 𝑋 ⊆ 𝑌), which implies 𝑥 ∈ 𝑋´. Thus 𝑌´ ⊆ 𝑋´, showing that 𝑓 is antitone.
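Both claims of Example 1.3.2 can be verified exhaustively on a small 𝐸; a Python sketch (the sets 𝐸, 𝐴 and the helper `powerset` are our own choices):

```python
# f_A(X) = A ∩ X is isotone and X ↦ X′ is antitone on P(E),
# checked exhaustively for a small E.
from itertools import combinations

E = {1, 2, 3}
A = {1, 2}

def powerset(s):
    s = list(s)
    return [set(c) for r in range(len(s) + 1) for c in combinations(s, r)]

P = powerset(E)

# Isotone: X ⊆ Y implies A ∩ X ⊆ A ∩ Y.
assert all((A & X) <= (A & Y) for X in P for Y in P if X <= Y)

# Antitone: X ⊆ Y implies Y′ ⊆ X′ (complement in E).
assert all((E - Y) <= (E - X) for X in P for Y in P if X <= Y)
```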
Example 1.3.3: Given 𝑓: 𝐸 → 𝐹, consider the induced direct image map 𝑓→: 𝑃(𝐸) → 𝑃(𝐹) defined for every 𝑋 ⊆ 𝐸 by 𝑓→(𝑋) = {𝑓(𝑥) | 𝑥 ∈ 𝑋}, and the induced inverse image map 𝑓←: 𝑃(𝐹) → 𝑃(𝐸) defined for every 𝑌 ⊆ 𝐹 by 𝑓←(𝑌) = {𝑥 ∈ 𝐸 | 𝑓(𝑥) ∈ 𝑌}. Each of these mappings is isotone.
We shall now give a natural interpretation of isotone mappings. For this purpose, we require the following notions.
Definition 1.3.4: (i) By a down-set of an ordered set 𝐸 we shall mean a subset 𝐷 of 𝐸 with the property that if 𝑥 ∈ 𝐷 and 𝑦 ∈ 𝐸 is such that 𝑦 ≤ 𝑥, then 𝑦 ∈ 𝐷. We include the empty subset of 𝐸 as a down-set. By a principal down-set we shall mean a down-set of the form
𝑥↓ = {𝑦 ∈ 𝐸 | 𝑦 ≤ 𝑥},
i.e., the down-set of 𝐸 generated by 𝑥.
(ii) By an up-set of an ordered set 𝐸 we shall mean a subset 𝑈 of 𝐸 with the property that if 𝑥 ∈ 𝑈 and 𝑦 ∈ 𝐸 is such that 𝑦 ≥ 𝑥, then 𝑦 ∈ 𝑈; and a principal up-set is an up-set of the form
𝑥↑ = {𝑦 ∈ 𝐸 | 𝑦 ≥ 𝑥},
i.e., the up-set of 𝐸 generated by 𝑥.
Example 1.3.5: In the chain ℚ⁺ of positive rational numbers the set 𝐷 = {𝑞 ∈ ℚ⁺ | 𝑞² ≤ 2} is a down-set that is not principal.
Solution: Clearly 𝐷 ⊆ ℚ⁺. Let 𝑥 ∈ 𝐷, so that 𝑥² ≤ 2, and let 𝑦 ∈ ℚ⁺ be such that 𝑦 ≤ 𝑥. Then 𝑦² ≤ 𝑥² ≤ 2, so 𝑦 ∈ 𝐷. Thus 𝐷 is a down-set. Now suppose to the contrary that 𝐷 is principal, say 𝐷 = 𝑥0↓ for some 𝑥0 ∈ ℚ⁺. No rational satisfies 𝑥0² = 2, so either 𝑥0² < 2 or 𝑥0² > 2. If 𝑥0² > 2 then 𝑥0 ∈ 𝑥0↓ but 𝑥0 ∉ 𝐷. If 𝑥0² < 2 then, since the set of positive rationals 𝑞 with 𝑞² < 2 has no greatest element, there exists 𝑞 ∈ 𝐷 with 𝑞 > 𝑥0, so that 𝑞 ∉ 𝑥0↓. In either case 𝐷 ≠ 𝑥0↓, a contradiction. Thus 𝐷 is not a principal down-set.
The next result shows that the intersection and the union of two down-sets are again down-sets.
Proposition 1.3.6: If 𝐴 and 𝐵 are down-sets of an ordered set 𝐸 then so are 𝐴 ∩ 𝐵 and 𝐴 ∪ 𝐵.
Proof: Let 𝐴 and 𝐵 be down-sets of an ordered set 𝐸 and let 𝑥 ∈ 𝐴 ∩ 𝐵 and 𝑦 ∈ 𝐸 with 𝑦 ≤ 𝑥. Then 𝑥 ∈ 𝐴 and 𝑥 ∈ 𝐵. Since 𝐴 is a down-set we have 𝑦 ∈ 𝐴, and since 𝐵 is a down-set we have 𝑦 ∈ 𝐵; thus 𝑦 ∈ 𝐴 ∩ 𝐵, which shows that 𝐴 ∩ 𝐵 is a down-set.
Now we show that 𝐴 ∪ 𝐵 is a down-set. Let 𝑥 ∈ 𝐴 ∪ 𝐵 and 𝑦 ∈ 𝐸 with 𝑦 ≤ 𝑥. Then 𝑥 ∈ 𝐴 or 𝑥 ∈ 𝐵. If 𝑥 ∈ 𝐴 then, since 𝐴 is a down-set, 𝑦 ∈ 𝐴; if 𝑥 ∈ 𝐵 then, since 𝐵 is a down-set, 𝑦 ∈ 𝐵. In either case 𝑦 ∈ 𝐴 ∪ 𝐵, showing that 𝐴 ∪ 𝐵 is a down-set.
Note: The above result is not true in general for principal down-sets. For example, in a two-element antichain {𝑎, 𝑏} the union 𝑎↓ ∪ 𝑏↓ = {𝑎, 𝑏} is not a principal down-set.
Theorem 1.3.7: If 𝐸 and 𝐹 are ordered sets and if 𝑓 ∶ 𝐸 → 𝐹 is any mapping then the following
statements are equivalent:
(1) 𝑓 is isotone;
(2) The inverse image of every principal down-set of 𝐹 is a down-set of 𝐸;
(3) The inverse image of every principal up-set of 𝐹 is an up-set of 𝐸.
Proof: (1) ⇒ (2): Suppose that 𝑓: 𝐸 → 𝐹 is isotone, let 𝑦 ∈ 𝐹 and let 𝐴 = 𝑓←(𝑦↓); then
𝐴 = {𝑥 ∈ 𝐸 | 𝑓(𝑥) ∈ 𝑦↓} = {𝑥 ∈ 𝐸 | 𝑓(𝑥) ≤ 𝑦}. (i)
If 𝐴 is empty then 𝐴 is clearly a down-set, so suppose that 𝐴 is non-empty and let 𝑥 ∈ 𝐴. Then for every 𝑧 ∈ 𝐸 with 𝑧 ≤ 𝑥 we have
𝑓(𝑧) ≤ 𝑓(𝑥) ≤ 𝑦 (since 𝑓 is isotone),
so 𝑓(𝑧) ≤ 𝑦 by transitivity. This gives 𝑓(𝑧) ∈ 𝑦↓ and hence 𝑧 ∈ 𝐴 (by (i)). So, by definition, 𝐴 is a down-set of 𝐸.
(2) ⇒ (1): For any 𝑥 ∈ 𝐸 we have 𝑓(𝑥) ∈ (𝑓(𝑥))↓, therefore 𝑥 ∈ 𝑓←((𝑓(𝑥))↓). By (2) this is a down-set of 𝐸, so if 𝑦 ∈ 𝐸 is such that 𝑦 ≤ 𝑥 then 𝑦 ∈ 𝑓←((𝑓(𝑥))↓), which implies 𝑓(𝑦) ∈ (𝑓(𝑥))↓, so by definition 𝑓(𝑦) ≤ 𝑓(𝑥). Thus 𝑓 is isotone.
(1) ⇔ (3): This follows from the above by the principle of duality.
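The implication (1) ⇒ (2) can be observed concretely. In the Python sketch below (the two divisibility orders and the doubling map are our own choices, not from the text), the inverse image of every principal down-set under an isotone map is checked to be a down-set:

```python
# Inverse images of principal down-sets under an isotone map are down-sets.
E = [1, 2, 3, 6]              # ordered by divisibility
F = [1, 2, 3, 4, 6, 12]       # ordered by divisibility

def le(x, y):                 # x ≤ y means x divides y
    return y % x == 0

f = {x: 2 * x for x in E}     # isotone: x | y implies 2x | 2y

def down(y, P):               # principal down-set y↓ inside P
    return {z for z in P if le(z, y)}

for y in F:
    A = {x for x in E if f[x] in down(y, F)}      # A = f←(y↓)
    # Down-set property: x ∈ A and z ≤ x imply z ∈ A.
    assert all(z in A for x in A for z in E if le(z, x))
```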
In view of the above result we now investigate under what conditions the inverse image of a principal down-set is also a principal down-set. The outcome will be a type of mapping that plays an important role in the sequel.
Theorem 1.4.1: If 𝐸 and 𝐹 are ordered sets, then the following conditions concerning 𝑓: 𝐸 → 𝐹 are equivalent.
(1) The inverse image under 𝑓 of every principal down-set of 𝐹 is a principal down-set of 𝐸.
(2) 𝑓 is isotone and there is an isotone mapping 𝑔: 𝐹 → 𝐸 such that 𝑔 ∘ 𝑓 ≥ 𝑖𝑑𝐸 and 𝑓 ∘ 𝑔 ≤ 𝑖𝑑𝐹.
Proof: (1) ⇒ (2): If (1) holds then by Theorem 1.3.7 𝑓 is isotone, and for every 𝑦 ∈ 𝐹 there exists 𝑥𝑦 ∈ 𝐸 such that 𝑓←(𝑦↓) = 𝑥𝑦↓, that is, {𝑥 ∈ 𝐸 | 𝑓(𝑥) ≤ 𝑦} = 𝑥𝑦↓.
Claim: For every 𝑦 ∈ 𝐹 this element 𝑥𝑦 is unique. Indeed, if 𝑥0 ∈ 𝐸 also satisfies 𝑓←(𝑦↓) = 𝑥0↓ then 𝑥𝑦↓ = 𝑥0↓, so 𝑥𝑦 ∈ 𝑥0↓ and 𝑥0 ∈ 𝑥𝑦↓, i.e., 𝑥𝑦 ≤ 𝑥0 and 𝑥0 ≤ 𝑥𝑦, whence 𝑥𝑦 = 𝑥0 by antisymmetry. So we can define a mapping 𝑔: 𝐹 → 𝐸 by setting 𝑔(𝑦) = 𝑥𝑦.
Claim: 𝑔: 𝐹 → 𝐸 defined as above is isotone. Let 𝑦1, 𝑦2 ∈ 𝐹 with 𝑦1 ≤ 𝑦2; we show that 𝑔(𝑦1) ≤ 𝑔(𝑦2). If 𝑥 ∈ 𝑦1↓ then 𝑥 ≤ 𝑦1 ≤ 𝑦2, so 𝑥 ∈ 𝑦2↓; thus 𝑦1↓ ⊆ 𝑦2↓. Since 𝑓← is isotone we get 𝑓←(𝑦1↓) ⊆ 𝑓←(𝑦2↓), that is, 𝑔(𝑦1)↓ ⊆ 𝑔(𝑦2)↓. Now 𝑔(𝑦1) ∈ 𝑔(𝑦1)↓ ⊆ 𝑔(𝑦2)↓, so 𝑔(𝑦1) ≤ 𝑔(𝑦2) by definition of a principal down-set.
Moreover, for every 𝑦 ∈ 𝐹 we have 𝑔(𝑦) ∈ 𝑔(𝑦)↓ = 𝑓←(𝑦↓), which implies 𝑓(𝑔(𝑦)) ∈ 𝑦↓, i.e., 𝑓(𝑔(𝑦)) ≤ 𝑦; thus 𝑓 ∘ 𝑔 ≤ 𝑖𝑑𝐹. Also, for every 𝑥 ∈ 𝐸 we have 𝑓(𝑥) ∈ (𝑓(𝑥))↓, so 𝑥 ∈ 𝑓←((𝑓(𝑥))↓) = 𝑔(𝑓(𝑥))↓, i.e., 𝑥 ≤ 𝑔(𝑓(𝑥)); thus 𝑔 ∘ 𝑓 ≥ 𝑖𝑑𝐸. Hence (1) ⇒ (2) holds.
(2) ⇒ (1): Suppose that (2) holds. If 𝑥 ∈ 𝐸 and 𝑦 ∈ 𝐹 with 𝑓(𝑥) ≤ 𝑦, then
𝑥 ≤ 𝑔(𝑓(𝑥)) ≤ 𝑔(𝑦) (since 𝑔 ∘ 𝑓 ≥ 𝑖𝑑𝐸 and 𝑔 is isotone).
Conversely, if 𝑥 ≤ 𝑔(𝑦) then, since 𝑓 is isotone, 𝑓(𝑥) ≤ 𝑓(𝑔(𝑦)) ≤ 𝑦.
It follows from these observations that 𝑓(𝑥) ≤ 𝑦 if and only if 𝑥 ≤ 𝑔(𝑦); that is, 𝑓(𝑥) ∈ 𝑦↓ if and only if 𝑥 ∈ 𝑔(𝑦)↓, i.e., 𝑥 ∈ 𝑓←(𝑦↓) if and only if 𝑥 ∈ 𝑔(𝑦)↓. This gives 𝑓←(𝑦↓) = (𝑔(𝑦))↓. Thus (2) ⇒ (1) holds.
Example 1.4.3: If 𝑓: 𝐸 → 𝐹 then the direct image map 𝑓 → ∶ 𝑃(𝐸) → 𝑃(𝐹) is residuated with
residual 𝑓 + = 𝑓 ← ∶ 𝑃(𝐹) → 𝑃(𝐸).
Solution: We are given 𝑓: 𝐸 → 𝐹 and we know that 𝑓 → ∶ 𝑃(𝐸) → 𝑃(𝐹) is defined for every
𝑋 ⊆ 𝐸 by
𝑓 → (𝑋) = {𝑓(𝑥)| 𝑥 ∈ 𝑋}. (i)
Also 𝑓 + = 𝑓 ← ∶ 𝑃(𝐹) → 𝑃(𝐸) is defined for every 𝑌 ⊆ 𝐹 by
𝑓 ← (𝑌) = {𝑥 ∈ 𝐸 | 𝑓(𝑥) ∈ 𝑌}. (ii)
We have to show that 𝑓 + = 𝑓 ← is the residual of 𝑓 → or in other words 𝑓 → is residuated.
Now for any 𝑋 ∈ 𝑃(𝐸) we have (𝑓 ← ∘ 𝑓 → )(𝑋) = 𝑓 ← (𝑓 → (𝑋))
= 𝑓 ← ({𝑓(𝑥)| 𝑥 ∈ 𝑋}) (by (i))
⊇ 𝑋.
So, from the above we get 𝑓← ∘ 𝑓→ ≥ 𝑖𝑑𝑃(𝐸). Similarly, we can show that 𝑓→ ∘ 𝑓← ≤ 𝑖𝑑𝑃(𝐹); therefore, by definition, 𝑓→ is residuated with residual 𝑓⁺ = 𝑓←. This establishes the claim and hence the result holds.
Example 1.4.4: If 𝐸 is any set and 𝐴 ⊆ 𝐸 then 𝜆𝐴 : 𝑃(𝐸) → 𝑃(𝐸) defined by 𝜆𝐴 (𝑋) = 𝐴 ∩ 𝑋 is
residuated with residual 𝜆𝐴+ given by 𝜆𝐴+ (𝑌) = 𝑌 ∪ 𝐴´.
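Example 1.4.4 can be verified exhaustively for a small 𝐸; a Python sketch (the sets 𝐸 and 𝐴 are our own choices):

```python
# λ_A(X) = A ∩ X is residuated with residual λ_A⁺(Y) = Y ∪ A′,
# checked over the whole power set of a small E.
from itertools import combinations

E = frozenset({1, 2, 3})
A = frozenset({1, 3})
P = [frozenset(c) for r in range(4) for c in combinations(E, r)]

lam  = lambda X: A & X            # λ_A
plus = lambda Y: Y | (E - A)      # λ_A⁺, with A′ = E \ A

# λ_A⁺ ∘ λ_A ≥ id_P(E)  and  λ_A ∘ λ_A⁺ ≤ id_P(E):
assert all(X <= plus(lam(X)) for X in P)
assert all(lam(plus(Y)) <= Y for Y in P)
```

Indeed λ_A(λ_A⁺(Y)) = A ∩ (Y ∪ A′) = A ∩ Y ⊆ Y, while X ⊆ (A ∩ X) ∪ A′ always, which is what the two assertions record.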
Corollary 1.4.7: For every ordered set 𝐸, the set Res 𝑬 of residuated mappings 𝑓: 𝐸 → 𝐸 forms a semigroup, as does the set Res⁺ 𝑬 of residual mappings 𝑓⁺: 𝐸 → 𝐸.
We now consider the notion of an isomorphism of ordered sets. Since an isomorphism should preserve all the structure in sight, we require the bijection 𝑓: 𝐸 ⟶ 𝐹 to be isotone, and we certainly also want 𝑓−1: 𝐹 ⟶ 𝐸 to be isotone. We note that simply choosing 𝑓 to be an isotone bijection is not enough; consider, e.g., the ordered sets with the following Hasse diagrams:
Definition 1.5.1: By an order isomorphism from an ordered set 𝐸 to an ordered set 𝐹, we mean
an isotone bijection 𝑓: 𝐸 → 𝐹 whose inverse is also isotone.
Note: From the above results we can see that the notion of an order isomorphism is equivalent to that of a residuated bijection 𝑓 whose inverse 𝑓−1: 𝐹 → 𝐸 is also isotone (and is then the residual of 𝑓). If there is an order isomorphism 𝑓: 𝐸 → 𝐹 then we say that 𝐸 and 𝐹 are (order) isomorphic, and we write 𝐸 ≃ 𝐹.
Theorem 1.5.2: Ordered sets 𝐸 and 𝐹 are isomorphic if and only if there is a surjective mapping 𝑓: 𝐸 → 𝐹 such that, for all 𝑥, 𝑦 ∈ 𝐸,
𝑥 ≤ 𝑦 if and only if 𝑓(𝑥) ≤ 𝑓(𝑦).
Proposition 1.5.4: If 𝐸 and 𝐹 are ordered sets, then under the Cartesian order 𝐸 × 𝐹 ≃ 𝐹 × 𝐸.
Proof: Define 𝑓 ∶ 𝐸 × 𝐹 → 𝐹 × 𝐸 by
𝑓(𝑎, 𝑏) = (𝑏, 𝑎).
𝒇 is surjective:
Let (𝑏, 𝑎) ∈ 𝐹 × 𝐸; then 𝑏 ∈ 𝐹 and 𝑎 ∈ 𝐸, so there exists (𝑎, 𝑏) ∈ 𝐸 × 𝐹 such that
𝑓(𝑎, 𝑏) = (𝑏, 𝑎).
Now let (𝑎, 𝑏), (𝑎′ , 𝑏 ′ ) ∈ 𝐸 × 𝐹 such that; (𝑎, 𝑏) ≤ (𝑎′ , 𝑏 ′ )
if and only if 𝑎 ≤ 𝑎′ in 𝐸 and 𝑏 ≤ 𝑏 ′ in 𝐹
if and only if 𝑏 ≤ 𝑏 ′ in 𝐹 and 𝑎 ≤ 𝑎′ in 𝐸
if and only if (𝑏, 𝑎) ≤ (𝑏 ′ , 𝑎′ ) in 𝐹 × 𝐸
if and only if 𝑓(𝑎, 𝑏) ≤ 𝑓(𝑎′ , 𝑏 ′ ).
Therefore, by (Theorem 1.5.2) 𝐸 × 𝐹 ≃ 𝐹 × 𝐸.
Example 1.5.7: Let 𝟐 denote the two-element chain 0 < 1. Prove that the mapping 𝑓: ℙ({1, 2, … , 𝑛}) → 𝟐𝑛 given by
𝑓(𝑋) = (𝑥1, … , 𝑥𝑛), where 𝑥𝑖 = 1 if 𝑖 ∈ 𝑋 and 𝑥𝑖 = 0 otherwise,
for each 𝑋 ⊆ {1, 2, … , 𝑛}, is an order isomorphism.
Proof: 𝒇 is isotone: Let 𝑋, 𝑌 ∈ 𝑃(1,2, … , 𝑛) and let 𝑓(𝑋) = (𝑥1 , … , 𝑥𝑛 ), 𝑓(𝑌) = (𝑦1 , … , 𝑦𝑛 )
then
𝑋 ⊆ 𝑌 if and only if (for all 𝑖) 𝑖 ∈ 𝑋 implies 𝑖 ∈ 𝑌
if and only if (for all 𝑖) 𝑥𝑖 = 1 implies𝑦𝑖 = 1
if and only if (for all 𝑖) 𝑥𝑖 ≤ 𝑦𝑖
if and only if 𝑓(𝑋) ≤ 𝑓(𝑌) in2𝑛 .
To show 𝑓 is onto, take 𝑥 = (𝑥1, 𝑥2, … , 𝑥𝑛) ∈ 𝟐𝑛; then 𝑥 = 𝑓(𝑋) where 𝑋 = {𝑖 | 𝑥𝑖 = 1}. So 𝑓 is onto. Therefore 𝑓 is an order isomorphism.
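The isomorphism of Example 1.5.7 can be checked exhaustively for a small 𝑛; a Python sketch (our own encoding of subsets as frozensets):

```python
# The map X ↦ (x_1, …, x_n) for n = 3: a bijection onto 2^n such that
# X ⊆ Y iff f(X) ≤ f(Y) componentwise.
from itertools import combinations

n = 3
subsets = [frozenset(c) for r in range(n + 1)
           for c in combinations(range(1, n + 1), r)]

def f(X):
    return tuple(1 if i in X else 0 for i in range(1, n + 1))

# f is a bijection onto the 2^n tuples:
assert len({f(X) for X in subsets}) == 2 ** n

# f is an order isomorphism: X ⊆ Y iff f(X) ≤ f(Y) in the Cartesian order.
assert all((X <= Y) == all(a <= b for a, b in zip(f(X), f(Y)))
           for X in subsets for Y in subsets)
```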
Chapter 2
Introduction to Lattices
Many important properties of an ordered set 𝑃 are expressed in terms of the existence of certain upper bounds or lower bounds of subsets of 𝑃. Two of the most important classes of ordered sets defined in this way are lattices and complete lattices. In this chapter we present the basic theory of such ordered sets and also consider lattices as algebraic structures. We also discuss a special type of lattice called the down-set lattice, and the mappings which preserve the operations of lattices. The chapter ends with some important results and examples on complete lattices.
In this section we go through the definitions of semilattices and lattices and discuss lattices as algebraic structures. The section ends with some important results on lattices.
If 𝐸 is an ordered set and 𝑥 ∈ 𝐸, then the canonical embedding of 𝑥↓ into 𝐸, that is, the restriction to 𝑥↓ of the identity mapping on 𝐸, is clearly isotone. For if 𝑖𝑥: 𝑥↓ → 𝐸 is defined by 𝑖𝑥(𝑦) = 𝑦, then 𝑦 ≤ 𝑧 gives 𝑖𝑥(𝑦) = 𝑦 ≤ 𝑧 = 𝑖𝑥(𝑧), i.e., 𝑖𝑥(𝑦) ≤ 𝑖𝑥(𝑧).
We shall now see when such an embedding is residuated. This has important consequences as far as the structure of 𝐸 is concerned.
Theorem 2.1.1: If 𝐸 is an ordered set, then the following conditions are equivalent:
(1) for every 𝑥 ∈ 𝐸 the canonical embedding 𝑖𝑥: 𝑥↓ → 𝐸 is residuated;
(2) for all 𝑥, 𝑦 ∈ 𝐸 there exists 𝛼 ∈ 𝐸 such that 𝑥↓ ∩ 𝑦↓ = 𝛼↓.
Proof: (1) ⟺ (2): For each 𝑥 ∈ 𝐸, let 𝑖𝑥: 𝑥↓ → 𝐸 be the canonical embedding. By the definition of a residuated mapping, (1) holds if and only if for all 𝑥, 𝑦 ∈ 𝐸 there exists 𝛼 = max{𝑧 ∈ 𝑥↓ : 𝑧 = 𝑖𝑥(𝑧) ≤ 𝑦}. We claim that this is equivalent to the existence of 𝛼 ∈ 𝐸 such that 𝑥↓ ∩ 𝑦↓ = 𝛼↓. Let 𝑧 ∈ 𝛼↓; then 𝑧 ≤ 𝛼 ≤ 𝑥 and 𝑧 ≤ 𝛼 ≤ 𝑦, so 𝑧 ∈ 𝑥↓ and 𝑧 ∈ 𝑦↓, which gives 𝑧 ∈ 𝑥↓ ∩ 𝑦↓; thus 𝛼↓ ⊆ 𝑥↓ ∩ 𝑦↓. Conversely, suppose 𝑘 ∈ 𝑥↓ ∩ 𝑦↓; then 𝑘 ∈ 𝑥↓ and 𝑘 ∈ 𝑦↓, i.e., 𝑘 ≤ 𝑥 and 𝑘 ≤ 𝑦, so 𝑘 ∈ {𝑧 ∈ 𝑥↓ : 𝑧 ≤ 𝑦} and hence 𝑘 ≤ 𝛼, i.e., 𝑘 ∈ 𝛼↓. Thus 𝑥↓ ∩ 𝑦↓ = 𝛼↓, which is (2).
Definition 2.1.2: If 𝐸 satisfies either of the equivalent conditions of the above theorem then we
shall denote by 𝑥 ∧ 𝑦 the element 𝛼 such that 𝑥 ↓ ∩ 𝑦 ↓ = 𝛼 ↓ and call 𝑥 ∧ 𝑦 the meet of 𝑥 and
𝑦. In this situation we shall say that 𝐸 is a meet semilattice.
We can of course develop the duals of the above, obtaining in this way the notion of a join semilattice, which is characterized by the intersection of any two principal up-sets being a principal up-set; the element 𝛽 such that 𝑥↑ ∩ 𝑦↑ = 𝛽↑ is denoted by 𝑥 ∨ 𝑦 and called the join of 𝑥 and 𝑦.
Definition 2.1.3: The minimum of a subset 𝑆 of a partially ordered set (𝐸; ≤) is an element of
𝑆 which is less than or equal to any other element of 𝑆.
Proposition 2.1.4: Every chain is a meet semilattice in which 𝑥 ∧ 𝑦 = min{𝑥, 𝑦}.
Proof: Let 𝐶 be any chain and let 𝑥, 𝑦 ∈ 𝐶; then either 𝑥 ≤ 𝑦 or 𝑦 ≤ 𝑥. Without loss of generality suppose that 𝑥 ≤ 𝑦, so that min{𝑥, 𝑦} = 𝑥. Now every 𝑧 ∈ 𝑥↓ satisfies 𝑧 ≤ 𝑥 ≤ 𝑦, so 𝑧 ∈ 𝑥↓ ∩ 𝑦↓; thus 𝑥↓ ⊆ 𝑥↓ ∩ 𝑦↓. Also 𝑥↓ ∩ 𝑦↓ ⊆ 𝑥↓, so 𝑥↓ ∩ 𝑦↓ = 𝑥↓. Thus by the definition of meet, 𝑥 ∧ 𝑦 = 𝑥 = min{𝑥, 𝑦}.
The following result shows that the converse of Propositions 2.1.6 and 2.1.7 holds, that is, every commutative idempotent semigroup gives rise to a meet semilattice and to a join semilattice.
Theorem 2.1.8: Every commutative idempotent semigroup can be ordered in such a way that
it forms a meet semilattice.
Proof: Suppose that 𝐸 is a commutative idempotent semigroup in which we shall denote the
law of composition by juxtaposition. Define a relation ≤ on 𝐸 by 𝑥 ≤ 𝑦 if and only if 𝑥𝑦 =
𝑥. We first show that ≤ is an order.
Reflexivity: Since 𝐸 is idempotent therefore we have 𝑥 2 = 𝑥 for every 𝑥 ∈ 𝐸 this implies
that 𝑥𝑥 = 𝑥 for all 𝑥 ∈ 𝐸; which implies 𝑥 ≤ 𝑥. So that ≤ is reflexive.
Antisymmetry: Let 𝑥 ≤ 𝑦 and 𝑦 ≤ 𝑥 for all 𝑥, 𝑦 ∈ 𝐸 then by commutativity of 𝐸 we have
𝑥 = 𝑥𝑦 = 𝑦𝑥 = 𝑦; thus 𝑥 = 𝑦. Hence ≤ is antisymmetric.
Transitivity: Let 𝑥 ≤ 𝑦 and 𝑦 ≤ 𝑧; then by definition 𝑥 = 𝑥𝑦 and 𝑦 = 𝑦𝑧, which implies 𝑥 = 𝑥𝑦 = 𝑥𝑦𝑧 = 𝑥𝑧, and therefore 𝑥 ≤ 𝑧, so that ≤ is transitive.
Now if 𝑥, 𝑦 ∈ 𝐸 then (𝑥𝑦)𝑥 = 𝑥𝑥𝑦 = 𝑥𝑦, so 𝑥𝑦 ≤ 𝑥. Interchanging the roles of 𝑥 and 𝑦 we also have (𝑥𝑦)𝑦 = 𝑥𝑦𝑦 = 𝑥𝑦, so 𝑥𝑦 ≤ 𝑦; therefore 𝑥𝑦 ∈ 𝑥↓ ∩ 𝑦↓. Now suppose that 𝑧 ∈ 𝑥↓ ∩ 𝑦↓. Then 𝑧 ≤ 𝑥 and 𝑧 ≤ 𝑦, which implies 𝑧 = 𝑧𝑥 and 𝑧 = 𝑧𝑦, so 𝑧 = 𝑧𝑥 = (𝑧𝑦)𝑥 = 𝑧(𝑥𝑦), and therefore 𝑧 ≤ 𝑥𝑦. This shows that 𝑥𝑦 is the top element of 𝑥↓ ∩ 𝑦↓. Thus 𝐸 is a meet semilattice in which 𝑥 ∧ 𝑦 = 𝑥𝑦.
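A concrete instance of Theorem 2.1.8: gcd is a commutative idempotent semigroup operation, and the induced order 𝑥 ≤ 𝑦 iff gcd(𝑥, 𝑦) = 𝑥 is exactly divisibility, with 𝑥 ∧ 𝑦 = gcd(𝑥, 𝑦). A Python sketch (the sample set is our own choice):

```python
# The commutative idempotent semigroup (ℕ, gcd) induces the
# divisibility order, in which gcd is the meet.
from math import gcd

E = [1, 2, 3, 4, 6, 12]
le = lambda x, y: gcd(x, y) == x          # the induced order xy = x

# The induced order is exactly divisibility:
assert all(le(x, y) == (y % x == 0) for x in E for y in E)

# gcd(x, y) is the greatest lower bound of {x, y}:
assert all(le(gcd(x, y), x) and le(gcd(x, y), y) and
           all(le(z, gcd(x, y)) for z in E if le(z, x) and le(z, y))
           for x in E for y in E)
```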
Theorem 2.1.9: Every commutative idempotent semigroup can be ordered in such a way that
it forms a join semilattice.
Proof: Suppose that 𝐸 is a commutative idempotent semigroup in which we denote the law of composition by juxtaposition. Define a relation ≤ on 𝐸 by 𝑥 ≤ 𝑦 if and only if 𝑥𝑦 = 𝑦. This is the dual of the order used in the previous theorem, and so by that theorem (together with Theorem 1.1.21) it is an order. If 𝑥, 𝑦 ∈ 𝐸 then 𝑥(𝑥𝑦) = 𝑥𝑥𝑦 = 𝑥𝑦, so 𝑥 ≤ 𝑥𝑦; interchanging the roles of 𝑥 and 𝑦 we also have 𝑦 ≤ 𝑥𝑦. Therefore 𝑥𝑦 ∈ 𝑥↑ ∩ 𝑦↑. Suppose now that 𝑧 ∈ 𝑥↑ ∩ 𝑦↑; then 𝑥 ≤ 𝑧 and 𝑦 ≤ 𝑧, so 𝑥𝑧 = 𝑧 and 𝑦𝑧 = 𝑧, whence (𝑥𝑦)𝑧 = 𝑥(𝑦𝑧) = 𝑥𝑧 = 𝑧, that is, 𝑥𝑦 ≤ 𝑧. Thus 𝑥𝑦 = sup{𝑥, 𝑦}, and 𝐸 is a join semilattice in which 𝑥 ∨ 𝑦 = 𝑥𝑦.
Example 2.1.10: If 𝑃 and 𝑄 are meet semilattices then the set of isotone mappings from 𝑃 to 𝑄 forms a meet semilattice with respect to the order defined by
𝑓 ≤ 𝑔 if and only if 𝑓(𝑥) ≤ 𝑔(𝑥) for all 𝑥 ∈ 𝑃.
Solution: Let 𝑀𝑎𝑝(𝑃, 𝑄) denote the set of isotone mappings from 𝑃 to 𝑄, ordered as above. Given 𝑓, 𝑔 ∈ 𝑀𝑎𝑝(𝑃, 𝑄), define ℎ: 𝑃 → 𝑄 by ℎ(𝑥) = 𝑓(𝑥) ∧ 𝑔(𝑥). If 𝑥 ≤ 𝑦 then 𝑓(𝑥) ≤ 𝑓(𝑦) and 𝑔(𝑥) ≤ 𝑔(𝑦), whence ℎ(𝑥) = 𝑓(𝑥) ∧ 𝑔(𝑥) ≤ 𝑓(𝑦) ∧ 𝑔(𝑦) = ℎ(𝑦); so ℎ is isotone and ℎ ∈ 𝑀𝑎𝑝(𝑃, 𝑄). Clearly ℎ ≤ 𝑓 and ℎ ≤ 𝑔. Moreover, if 𝑘 ∈ 𝑀𝑎𝑝(𝑃, 𝑄) satisfies 𝑘 ≤ 𝑓 and 𝑘 ≤ 𝑔 then 𝑘(𝑥) ≤ 𝑓(𝑥) ∧ 𝑔(𝑥) = ℎ(𝑥) for every 𝑥 ∈ 𝑃, so 𝑘 ≤ ℎ. Thus ℎ = 𝑓 ∧ 𝑔, and 𝑀𝑎𝑝(𝑃, 𝑄) is a meet semilattice.
(i) We note here that the notation 𝐹↓ is used to denote the set of lower bounds of 𝐹, namely {𝑥 ∈ 𝐸 | 𝑥 ≤ 𝑎 for every 𝑎 ∈ 𝐹}, and 𝐹↑ to denote the set of upper bounds of 𝐹. In particular we have {𝑥}↓ = 𝑥↓ and {𝑥}↑ = 𝑥↑.
(ii) Note that since 𝐹↓ and 𝐹↑ denote the sets of lower bounds and upper bounds respectively, these sets may be empty when 𝐹 is unbounded. This cannot happen when 𝐸 is bounded, for in that case 𝐸 has both a top element 1 and a bottom element 0. If 𝐸 has a top element 1 then, since 1 ≥ 𝑥 for every 𝑥 ∈ 𝐸, we have 𝐸↑ = {1}; otherwise 𝐸↑ = ∅. Similarly, if 𝐸 has a bottom element 0 then 𝐸↓ = {0}; otherwise 𝐸↓ = ∅.
(iii) Note that if 𝐹 = ∅ then every 𝑥 ∈ 𝐸 vacuously satisfies 𝑦 ≤ 𝑥 for every 𝑦 ∈ 𝐹. Thus ∅↑ = 𝐸 and similarly ∅↓ = 𝐸.
Definition 2.1.13: If 𝐸 is an ordered set and 𝐹 is a subset of 𝐸, then by the infimum or greatest lower bound of 𝐹 we mean the top element, when it exists, of the set 𝐹↓ of lower bounds of 𝐹. We denote this by 𝑖𝑛𝑓𝐸 𝐹, or simply 𝑖𝑛𝑓 𝐹 if there is no confusion.
Since ∅↓ = 𝐸, we see that 𝑖𝑛𝑓𝐸 ∅ exists if and only if 𝐸 has a top element 1, in which case 𝑖𝑛𝑓𝐸 ∅ = 1. It is immediate from what has gone before that a meet semilattice can be described as an ordered set in which every pair of elements 𝑥, 𝑦 has a greatest lower bound; here we have 𝑖𝑛𝑓{𝑥, 𝑦} = 𝑥 ∧ 𝑦. A simple inductive argument shows that every finite subset {𝑥1, 𝑥2, . . . , 𝑥𝑛} of a meet semilattice has an infimum, namely 𝑖𝑛𝑓{𝑥1, 𝑥2, … , 𝑥𝑛} = 𝑥1 ∧ 𝑥2 ∧ … ∧ 𝑥𝑛.
Definition 2.1.15: A lattice is an ordered set (𝐸, ≤) which with respect to its order is both a
meet semilattice and join semilattice.
Thus a lattice is an ordered set in which every pair of elements, and hence every finite subset, has an infimum and a supremum. We often denote a lattice by (𝐸; ∧, ∨, ≤).
Remarks 2.1.16:
(1) Let 𝐸 be an ordered set. If 𝑥, 𝑦 ∈ 𝐸 and 𝑥 ≤ 𝑦 then (𝑥, 𝑦)𝑢 = 𝑦↑ and (𝑥, 𝑦)𝑙 = 𝑥↓ (where (𝑥, 𝑦)𝑢 denotes the set of upper bounds of {𝑥, 𝑦} and (𝑥, 𝑦)𝑙 the set of lower bounds of {𝑥, 𝑦}). Since the least element of 𝑦↑ is 𝑦 and the greatest element of 𝑥↓ is 𝑥, we have 𝑥 ∨ 𝑦 = 𝑦 and 𝑥 ∧ 𝑦 = 𝑥 whenever 𝑥 ≤ 𝑦. In particular, since ≤ is reflexive, we have 𝑥 ∨ 𝑥 = 𝑥 and 𝑥 ∧ 𝑥 = 𝑥.
(2) In an ordered set 𝐸, the 𝑙𝑢𝑏 of (𝑥, 𝑦) may fail to exist for two different reasons:
(a) because 𝑥 and 𝑦 have no common upper bounds,
(b) because they have no least upper bound.
For example, consider the following Hasse diagrams:
Here (𝑎, 𝑏)𝑢 = {𝑐, 𝑑}, and thus 𝑎 ∨ 𝑏 does not exist, as (𝑎, 𝑏)𝑢 has no least element.
(3) Let 𝐿 be a lattice then for all 𝑎, 𝑏, 𝑐, 𝑑 ∈ 𝐿,
(i) 𝑎 ≤ 𝑏 implies 𝑎 ∨ 𝑐 ≤ 𝑏 ∨ 𝑐 and 𝑎 ∧ 𝑐 ≤ 𝑏 ∧ 𝑐;
(ii) 𝑎 ≤ 𝑏 and 𝑐 ≤ 𝑑 implies 𝑎 ∨ 𝑐 ≤ 𝑏 ∨ 𝑑 and 𝑎 ∧ 𝑐 ≤ 𝑏 ∧ 𝑑.
(4) Let 𝐿 be a lattice, let 𝑎, 𝑏, 𝑐 ∈ 𝐿 and assume 𝑏 ≤ 𝑎 ≤ 𝑏 ∨ 𝑐. Since 𝑐 ≤ 𝑏 ∨ 𝑐 we have (𝑏 ∨ 𝑐) ∨ 𝑐 = 𝑏 ∨ 𝑐 (by (1)). Thus 𝑏 ∨ 𝑐 ≤ 𝑎 ∨ 𝑐 ≤ (𝑏 ∨ 𝑐) ∨ 𝑐 = 𝑏 ∨ 𝑐, whence 𝑎 ∨ 𝑐 = 𝑏 ∨ 𝑐.
Lemma 2.1.17: (Connecting lemma) Let 𝐿 be a lattice and 𝑎, 𝑏 ∈ 𝐿. Then the following are
equivalent:
(i) 𝑎 ≤ 𝑏;
(ii) 𝑎 ∨ 𝑏 = 𝑏;
(iii) 𝑎 ∧ 𝑏 = 𝑎.
Proof: We have already shown in the above remark that (i) implies both (ii) and (iii). Now assume (ii) holds; then 𝑏 = 𝑎 ∨ 𝑏 is an upper bound of {𝑎, 𝑏}, so 𝑎 ≤ 𝑏 and (i) holds. Likewise, if (iii) holds then 𝑎 = 𝑎 ∧ 𝑏 is a lower bound of {𝑎, 𝑏}, so 𝑎 ≤ 𝑏 and (i) holds.
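The connecting lemma can be observed in the lattice of divisors of 12, where ∧ is gcd and ∨ is lcm; a Python sketch (helper names are ours):

```python
# The connecting lemma in the divisor lattice of 12:
# a ≤ b  iff  a ∨ b = b  iff  a ∧ b = a.
from math import gcd

L = [1, 2, 3, 4, 6, 12]
lcm = lambda a, b: a * b // gcd(a, b)
le  = lambda a, b: b % a == 0             # divisibility order

assert all(le(a, b) == (lcm(a, b) == b) == (gcd(a, b) == a)
           for a in L for b in L)
```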
Theorem 2.1.18: A set 𝐸 can be given the structure of a lattice if and only if it can be endowed with two laws of composition (𝑥, 𝑦) ↦ 𝑥 ⋒ 𝑦 and (𝑥, 𝑦) ↦ 𝑥 ⋓ 𝑦 such that
(1) each of ⋒ and ⋓ is commutative and associative;
(2) the absorption laws 𝑥 ⋒ (𝑥 ⋓ 𝑦) = 𝑥 = 𝑥 ⋓ (𝑥 ⋒ 𝑦) hold.
Proof: Suppose that 𝐸 is a lattice. Then by definition (𝐸; ≤) is both a meet semilattice and a join semilattice; that is, it has two laws of composition satisfying (1), namely (𝑥, 𝑦) ↦ 𝑥 ∧ 𝑦 and (𝑥, 𝑦) ↦ 𝑥 ∨ 𝑦. To show that (2) holds: 𝑥 ≤ sup{𝑥, 𝑦} = 𝑥 ∨ 𝑦, so by the connecting lemma 𝑥 ∧ (𝑥 ∨ 𝑦) = 𝑖𝑛𝑓{𝑥, 𝑥 ∨ 𝑦} = 𝑥. Also 𝑥 ≥ 𝑖𝑛𝑓{𝑥, 𝑦} = 𝑥 ∧ 𝑦, and so by the connecting lemma 𝑥 ∨ (𝑥 ∧ 𝑦) = 𝑥. Thus 𝑥 ∧ (𝑥 ∨ 𝑦) = 𝑥 ∨ (𝑥 ∧ 𝑦) = 𝑥, which proves (2).
Suppose now that 𝐸 has two laws of composition ⋒ and ⋓ that satisfy (1) and (2). Using (2) we have 𝑥 ⋓ 𝑥 = 𝑥 ⋓ [𝑥 ⋒ (𝑥 ⋓ 𝑥)], and again using (2) we have 𝑥 ⋓ [𝑥 ⋒ (𝑥 ⋓ 𝑥)] = 𝑥. Thus 𝑥 ⋓ 𝑥 = 𝑥, and similarly 𝑥 ⋒ 𝑥 = 𝑥, which shows that each operation is idempotent. Thus (𝐸; ⋒) and (𝐸; ⋓) are commutative idempotent semigroups, so by Theorems 2.1.8 and 2.1.9 they are semilattices. In order to show that (𝐸; ⋓, ⋒) is a lattice with ⋒ as ∧ and ⋓ as ∨, we must show that the orders defined by the two compositions coincide; in other words, we must show that 𝑥 ⋒ 𝑦 = 𝑥 is equivalent to 𝑥 ⋓ 𝑦 = 𝑦. Now if 𝑥 ⋓ 𝑦 = 𝑦 then by the absorption law 𝑥 = 𝑥 ⋒ (𝑥 ⋓ 𝑦) = 𝑥 ⋒ 𝑦; and if 𝑥 ⋒ 𝑦 = 𝑥 then by the absorption law 𝑦 = (𝑥 ⋒ 𝑦) ⋓ 𝑦 = 𝑥 ⋓ 𝑦. Thus we see that 𝐸 is a lattice in which 𝑥 ≤ 𝑦 is described equivalently by 𝑥 ⋒ 𝑦 = 𝑥 or by 𝑥 ⋓ 𝑦 = 𝑦.
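The standard instance of Theorem 2.1.18 is a power set with intersection and union; a Python sketch (the set 𝐸 is our own choice) checks the absorption laws and that the two induced orders coincide:

```python
# Set intersection and union satisfy the absorption laws, and the two
# orders they induce coincide, over the power set of a small E.
from itertools import combinations

E = {1, 2, 3}
P = [frozenset(c) for r in range(4) for c in combinations(E, r)]

# Absorption: X ∩ (X ∪ Y) = X = X ∪ (X ∩ Y).
assert all((X & (X | Y)) == X == (X | (X & Y)) for X in P for Y in P)

# The induced orders coincide: X ∩ Y = X  iff  X ∪ Y = Y (both say X ⊆ Y).
assert all(((X & Y) == X) == ((X | Y) == Y) for X in P for Y in P)
```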
Proof: To prove that every chain (𝑃; ≤) is a lattice, fix 𝑎, 𝑏 ∈ 𝑃 and without loss of generality assume that 𝑎 ≤ 𝑏. By reflexivity of ≤ we have 𝑎 ≤ 𝑎; hence 𝑎 is a lower bound of the set {𝑎, 𝑏}. To see that it is the greatest lower bound, note that if 𝑐 ∈ 𝑃 is any lower bound of {𝑎, 𝑏} then 𝑐 ≤ 𝑎; it follows that 𝑎 = 𝑖𝑛𝑓{𝑎, 𝑏}. Now we show that 𝑏 = 𝑠𝑢𝑝{𝑎, 𝑏}. By reflexivity 𝑏 ≤ 𝑏, and by assumption 𝑎 ≤ 𝑏; hence 𝑏 is an upper bound of {𝑎, 𝑏}. If 𝑘 ∈ 𝑃 is any upper bound of {𝑎, 𝑏} then in particular 𝑏 ≤ 𝑘, and therefore 𝑠𝑢𝑝{𝑎, 𝑏} = 𝑏. This shows that (𝑃; ≤) is a lattice.
Definition 2.1.18: A lattice 𝐿 is said to be a bounded lattice if it has both a top element, denoted by 1, and a bottom element, denoted by 0.
Solution: For any two elements 𝑆 and 𝑇 in ℙ(𝐸) we have 𝑆 ⊆ 𝑆 ∪ 𝑇 and 𝑇 ⊆ 𝑆 ∪ 𝑇. Thus 𝑆 ∪ 𝑇 is an upper bound of 𝑆 and 𝑇. Now if 𝑅 is any other upper bound then 𝑆 ⊆ 𝑅 and 𝑇 ⊆ 𝑅, which gives 𝑆 ∪ 𝑇 ⊆ 𝑅, so that 𝑠𝑢𝑝{𝑆, 𝑇} = 𝑆 ∪ 𝑇. Also 𝑆 ∩ 𝑇 ⊆ 𝑆 and 𝑆 ∩ 𝑇 ⊆ 𝑇, so 𝑆 ∩ 𝑇 is a lower bound of 𝑆 and 𝑇. Now if 𝐿 is any other lower bound of 𝑆 and 𝑇 then 𝐿 ⊆ 𝑆 and 𝐿 ⊆ 𝑇, which gives 𝐿 ⊆ 𝑆 ∩ 𝑇. Thus 𝑖𝑛𝑓{𝑆, 𝑇} = 𝑆 ∩ 𝑇.
Since ∅ ⊆ 𝑆 for every 𝑆 ∈ ℙ(𝐸), ∅ is the bottom element of ℙ(𝐸). Also 𝑆 ⊆ 𝐸 for every 𝑆 ∈ ℙ(𝐸), so 𝐸 is the top element; therefore ℙ(𝐸) is a bounded lattice.
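A brute-force check of these sup/inf formulas on a small example; the choice 𝐸 = {1, 2, 3, 4} is an assumption for illustration:

```python
from itertools import combinations

# P(E) for E = {1,2,3,4}: sup is union, inf is intersection.
E = frozenset({1, 2, 3, 4})
subsets = [frozenset(c) for r in range(5) for c in combinations(E, r)]

for S in subsets:
    for T in subsets:
        # S ∪ T is an upper bound, contained in every upper bound R;
        assert S <= S | T and T <= S | T
        assert all(S | T <= R for R in subsets if S <= R and T <= R)
        # dually S ∩ T is the greatest lower bound.
        assert all(R <= S & T for R in subsets if R <= S and R <= T)

# P(E) is bounded: ∅ is the bottom element and E the top.
assert all(frozenset() <= S <= E for S in subsets)
```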
Example 2.1.20: For every infinite set 𝐸, let ℙ𝑓(𝐸) be the set of finite subsets of 𝐸; then ℙ𝑓(𝐸) is a lattice with no top element.
Example 2.1.22: If 𝑉 is a vector space and 𝑆𝑢𝑏𝑉 denotes the set of subspaces of 𝑉 then
( 𝑆𝑢𝑏𝑉 ; ⊆ ) forms a lattice with 𝑖𝑛𝑓 {𝐴, 𝐵} = 𝐴 ∩ 𝐵 and 𝑠𝑢𝑝 { 𝐴 , 𝐵 } = 𝐴 + 𝐵 = {𝑎 +
𝑏 |𝑎 ∈ 𝐴 and 𝑏 ∈ 𝐵}.
Example 2.1.25: We draw the Hasse diagram of the lattice of subgroups of the alternating group 𝒜4.
Proof: 𝒜4 is the alternating group on 4 letters, that is, the set of all even permutations of {1, 2, 3, 4}:
𝒜4 = {(1), (1 2)(3 4), (1 3)(2 4), (1 4)(2 3), (1 2 3), (1 3 2), (1 2 4), (1 4 2), (1 3 4), (1 4 3), (2 3 4), (2 4 3)},
which has 12 elements, so by Lagrange's Theorem the subgroups of 𝒜4 can only have order 1, 2, 3, 4, 6 or 12. The subgroups of order 1 and order 12 are trivial.
The subgroups of order 2 are {1, (1 2)(3 4)}, {1, (1 3)(2 4)} and {1, (1 4)(2 3)}. The subgroups of order 3 are {1, (2 3 4), (2 4 3)}, {1, (1 3 4), (1 4 3)}, {1, (1 2 4), (1 4 2)} and {1, (1 2 3), (1 3 2)}.
The only subgroup of order 4 is {1, (1 2)(3 4), (1 3)(2 4), (1 4)(2 3)}; 𝒜4 has no subgroup of order 6.
Now let 𝑆𝑢𝑏 𝒜4 denote the set of subgroups of the alternating group 𝒜4; then
𝑆𝑢𝑏 𝒜4 = {{1}, {1, (1 2)(3 4)}, {1, (1 3)(2 4)}, {1, (1 4)(2 3)}, {1, (2 3 4), (2 4 3)}, {1, (1 3 4), (1 4 3)}, {1, (1 2 4), (1 4 2)}, {1, (1 2 3), (1 3 2)}, {1, (1 2)(3 4), (1 3)(2 4), (1 4)(2 3)}, 𝒜4}.
Thus the subgroup lattice of the alternating group 𝒜4 is as follows:
(Hasse diagram of the subgroup lattice 𝑆𝑢𝑏 𝒜4, with {1} at the bottom and 𝒜4 at the top.)
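The enumeration above can be verified by brute force; the following sketch (an illustration, not part of the text) represents permutations of {0, 1, 2, 3} as tuples rather than in cycle notation:

```python
from itertools import permutations, combinations
from collections import Counter

# Permutations of {0,1,2,3} as tuples; compose(p, q)(i) = p[q[i]].
def compose(p, q):
    return tuple(p[i] for i in q)

def is_even(p):   # even permutations have an even number of inversions
    return sum(p[i] > p[j] for i in range(4) for j in range(i + 1, 4)) % 2 == 0

A4 = [p for p in permutations(range(4)) if is_even(p)]
identity = (0, 1, 2, 3)

# A finite subset containing the identity is a subgroup iff it is
# closed under composition (closure under inverses is then automatic).
subgroups = []
for r in range(1, len(A4) + 1):
    for c in combinations(A4, r):
        s = set(c)
        if identity in s and all(compose(a, b) in s for a in s for b in s):
            subgroups.append(s)

orders = Counter(len(s) for s in subgroups)
assert orders == {1: 1, 2: 3, 3: 4, 4: 1, 12: 1}   # and none of order 6
```

The counts agree with the text: one trivial subgroup, three of order 2, four of order 3, one of order 4, none of order 6, and 𝒜4 itself.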
Example 2.2.1: Consider the ordered set with the Hasse diagram shown below.
Down-set lattices will be of considerable interest to us later. For the moment we consider how to compute the cardinality of 𝒪(𝐸) when the ordered set 𝐸 is finite. Upper and lower bounds for this cardinality are provided by the following result.
Proof: Since 𝒪(𝐸) ⊆ ℙ(𝐸), we have |𝒪(𝐸)| ≤ |ℙ(𝐸)| = 2ⁿ. We therefore only need to show that 𝑛 + 1 ≤ |𝒪(𝐸)|. We have two cases to consider.
Case 1: 𝐸 is a chain.
𝐸 has the least number of down-sets when it is a chain, in which case 𝒪(𝐸) is also a chain. We prove by induction on 𝑛 that in this case |𝒪(𝐸)| = 𝑛 + 1.
For 𝑛 = 1 the result is trivial. For 𝑛 = 2, let 𝐸 = {𝑥1, 𝑥2}; since 𝐸 is a chain we may assume 𝑥1 ≤ 𝑥2. The down-sets of 𝐸 are then ∅, 𝑥1↓ = {𝑥1} and 𝑥2↓ = {𝑥1, 𝑥2} = 𝐸, so |𝒪(𝐸)| = 3 = 2 + 1 and the result is true for 𝑛 = 2. Assume that the result is true for all chains of cardinality less than |𝐸| = 𝑛, and let 𝐸1 be the chain obtained from 𝐸 by deleting its top element, so that |𝐸1| = 𝑛 − 1. By the induction hypothesis |𝒪(𝐸1)| = 𝑛. Every down-set of 𝐸 other than 𝐸 itself is a down-set of 𝐸1, so |𝒪(𝐸)| = |𝒪(𝐸1)| + 1 = 𝑛 + 1. Thus the result is true for |𝐸| = 𝑛 also.
Case 2: 𝐸 is an anti-chain.
𝐸 has the greatest number of down-sets when it is an anti-chain, for in that case no element lies below any other, so every subset of 𝐸 is a down-set and 𝒪(𝐸) = ℙ(𝐸), which is of cardinality 2ⁿ. For instance, when 𝑛 = 2 and 𝐸 = {𝑥1, 𝑥2} we have 𝒪(𝐸) = ℙ(𝐸) = {∅, {𝑥1}, {𝑥2}, 𝐸}, so |𝒪(𝐸)| = 4 = 2².
Thus 𝑛 + 1 ≤ |𝒪(𝐸)| ≤ 2ⁿ.
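The two extreme cases can be checked computationally; a sketch for 𝑛 = 3 (the helper `down_sets` is a name of our own choosing):

```python
from itertools import combinations

def down_sets(elements, leq):
    """All subsets D such that x in D and y ≤ x imply y in D."""
    result = []
    for r in range(len(elements) + 1):
        for c in combinations(elements, r):
            d = set(c)
            if all(y in d for x in d for y in elements if leq(y, x)):
                result.append(d)
    return result

n = 3
chain = down_sets([1, 2, 3], lambda a, b: a <= b)       # 1 < 2 < 3
antichain = down_sets([1, 2, 3], lambda a, b: a == b)   # no comparabilities
assert len(chain) == n + 1        # the least possible number of down-sets
assert len(antichain) == 2 ** n   # the greatest: O(E) = P(E)
```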
In certain cases ∣𝒪(𝐸)∣ can be calculated by using an ingenious algorithm that we shall
describe. For this purpose, we shall denote by 𝐸\{𝑥} the ordered set obtained from 𝐸 by
deleting the element 𝑥 and related comparabilities resulting from transitivity through 𝑥.
Example 2.2.3: Consider the lattice 𝐿 given by the following Hasse diagram.
Therefore, 𝐸\{𝑥} can be obtained by deleting the element 𝑥 and related comparabilities. Thus
the Hasse diagram of 𝐸\{𝑥} is:
Definition 2.2.4: We shall also use the notation 𝑥↕ to denote the cone through 𝑥, namely the
set of elements that are comparable to 𝑥, formally;
𝑥 ↕ = 𝑥 ↓ ∪ 𝑥 ↑ = {𝑦 ∈ 𝐸 ∶ 𝑦 ∦ 𝑥}.
We now give an alternate formula for ∣ 𝒪(𝐸 ) ∣ via the concept of cone and maximal and
minimal elements as described above.
Proof: Let 𝑋 be a non-empty down-set of 𝐸 and let 𝑆 = {𝑥1, …, 𝑥𝑘} be the set of maximal elements of 𝑋. The 𝑥𝑖 are pairwise incomparable, so 𝑆 is an anti-chain, and since 𝐸 is finite we have 𝑋 = 𝑆↓; thus 𝑆 is the unique anti-chain corresponding to the down-set 𝑋. Conversely, if 𝐹 is any anti-chain in 𝐸 (that is, 𝑥 ∥ 𝑦 for all distinct 𝑥, 𝑦 ∈ 𝐹) then 𝐹↓ is a down-set whose set of maximal elements is precisely 𝐹. Thus the non-empty down-sets of 𝐸 correspond bijectively to the non-empty anti-chains in 𝐸. Counting ∅ as an anti-chain, we thus see that |𝒪(𝐸)| is the number of anti-chains in 𝐸. For any given element 𝑥 of 𝐸, this can be expressed as the number of anti-chains that contain 𝑥 plus the number of anti-chains that do not contain 𝑥.
Now if an anti-chain 𝐴 contains a particular element 𝑥 of 𝐸, then it contains no other element of the cone 𝑥↕; for if 𝐴 contained an element 𝑦 ≠ 𝑥 of 𝑥↕ then 𝑥 ≤ 𝑦 or 𝑦 ≤ 𝑥, which contradicts the fact that 𝐴 is an anti-chain. Since 𝐸 ∖ 𝑥↕ = {𝑦 ∈ 𝐸 : 𝑥 ∥ 𝑦}, every anti-chain that contains 𝑥 is of the form {𝑥} ∪ 𝐵 where 𝐵 is an anti-chain in 𝐸 ∖ 𝑥↕, and conversely. Hence the number of anti-chains that contain 𝑥 is precisely the number of anti-chains in 𝐸 ∖ 𝑥↕, namely |𝒪(𝐸 ∖ 𝑥↕)|. Likewise the anti-chains that do not contain 𝑥 are precisely the anti-chains of 𝐸 ∖ {𝑥}, whose number is |𝒪(𝐸 ∖ 𝑥)|. Thus the result follows.
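The resulting recurrence |𝒪(𝐸)| = |𝒪(𝐸 ∖ 𝑥↕)| + |𝒪(𝐸 ∖ 𝑥)| can be verified on a small example; here we assume, for illustration, the 4-element poset with 𝑎 < 𝑏, 𝑎 < 𝑐 and 𝑑 isolated:

```python
from itertools import combinations

# The poset E = {a, b, c, d} with a < b, a < c and d incomparable to all.
pairs = {("a", "b"), ("a", "c")}
def leq(x, y):
    return x == y or (x, y) in pairs

def n_down_sets(elems):
    count = 0
    for r in range(len(elems) + 1):
        for c in combinations(elems, r):
            d = set(c)
            if all(y in d for x in d for y in elems if leq(y, x)):
                count += 1
    return count

E = ["a", "b", "c", "d"]
x = "a"
cone = [y for y in E if leq(y, x) or leq(x, y)]          # x↕ = {a, b, c}
lhs = n_down_sets(E)
rhs = n_down_sets([y for y in E if y not in cone]) + \
      n_down_sets([y for y in E if y != x])
assert lhs == rhs == 10
```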
Example 2.2.9: We now draw the Hasse diagram of the lattice of the down sets of each of
following ordered sets;
Solution: Let 𝐸 be the ordered set given by the Hasse diagram (1).
Then 𝒪(𝐸) = {∅, {𝑎}, {𝑎, 𝑏}, {𝑎, 𝑏, 𝑐}}; thus the Hasse diagram of the down-sets of 𝐸 is given as;
Let 𝐹 be the ordered set given by Hasse diagram (2)
Then 𝒪(𝐹) = {∅, {𝑎}, {𝑎, 𝑏}, {𝑎, 𝑏, 𝑐}, {𝑎, 𝑏, 𝑐, 𝑑, 𝑒}, {𝑎, 𝑑}};
thus the Hasse diagram of the down-sets of 𝐹 is;
Example 2.2.10: If (𝑃1; ≤1) and (𝑃2; ≤2) are the ordered sets represented by the diagrams (a) and (b) respectively, draw the Hasse diagram of the down-sets of 𝑃1 ∪ 𝑃2.
Note that 𝑃1 ∪ 𝑃2 = {𝑎, 𝑏, 𝑥, 𝑦, 𝑧}. We have 𝑎↓ = {𝑎}, 𝑏↓ = {𝑎, 𝑏}, 𝑥↓ = {𝑥}, 𝑦↓ = {𝑥, 𝑦}, 𝑧↓ = {𝑥, 𝑧}. The down-sets of 𝑃1 ∪ 𝑃2 are exactly the sets 𝐴 ∪ 𝐵 with 𝐴 ∈ 𝒪(𝑃1) and 𝐵 ∈ 𝒪(𝑃2); since 𝒪(𝑃1) = {∅, {𝑎}, {𝑎, 𝑏}} and 𝒪(𝑃2) = {∅, {𝑥}, {𝑥, 𝑦}, {𝑥, 𝑧}, {𝑥, 𝑦, 𝑧}}, the lattice 𝒪(𝑃1 ∪ 𝑃2) has 3 × 5 = 15 elements. Thus the required Hasse diagram is;
2.3 Sublattices
As we saw in the previous section, important sub-structures of an ordered set are the down-sets and the principal down-sets. In this section we consider another type of substructure, this time of semilattices.
Example 2.3.3: If 𝑉 is a vector space and if 𝑆𝑢𝑏 𝑉 denotes the set of subspaces of 𝑉, then
( 𝑆𝑢𝑏 𝑉; ⊆) is easily seen to be an ordered set under inclusion. Suppose 𝐴, 𝐵 ∈ 𝑆𝑢𝑏 𝑉 we have
𝐴 ∩ 𝐵 ⊆ 𝐴 and 𝐴 ∩ 𝐵 ⊆ 𝐵, therefore 𝐴 ∩ 𝐵 is the lower bound of {𝐴, 𝐵}. Suppose 𝑊 be any
subspace of 𝑉 such that 𝑊 ⊆ 𝐴 and 𝑊 ⊆ 𝐵 then 𝑊 ⊆ 𝐴 ∩ 𝐵; therefore 𝐴 ∩ 𝐵 is the biggest
subspace that is contained in both 𝐴 and 𝐵. Thus 𝐴 ∩ 𝐵 = 𝑖𝑛𝑓{𝐴, 𝐵}. Therefore 𝑆𝑢𝑏 𝑉 is a
meet subsemilattice of the lattice ℙ(𝑉).
Example 2.3.4: For every ordered set 𝐸, the lattice 𝒪(𝐸) of down-sets of 𝐸 is a sublattice of the lattice ℙ(𝐸). Indeed 𝒪(𝐸) ⊆ ℙ(𝐸), and for any 𝐴, 𝐵 ∈ 𝒪(𝐸) clearly 𝐴 ∩ 𝐵 ⊆ 𝐸. Let 𝑥 ∈ 𝐴 ∩ 𝐵 and 𝑦 ∈ 𝐸 with 𝑦 ≤ 𝑥; then 𝑥 ∈ 𝐴 and 𝑥 ∈ 𝐵, and since 𝐴 and 𝐵 are down-sets we have 𝑦 ∈ 𝐴 and 𝑦 ∈ 𝐵, which gives 𝑦 ∈ 𝐴 ∩ 𝐵. Thus 𝐴 ∩ 𝐵 ∈ 𝒪(𝐸). Likewise we can show that 𝐴 ∪ 𝐵 ∈ 𝒪(𝐸). Thus 𝒪(𝐸) is a sublattice of ℙ(𝐸).
Definition 2.3.5: By an ideal of a lattice 𝐿 we shall mean a sublattice 𝐽 of 𝐿 that is also a down-set. Dually, by a filter of 𝐿 we mean a sublattice that is also an up-set.
Next we prove that the set of ideals of a lattice is again a lattice.
Theorem 2.3.6: If 𝐿 is a lattice then, ordered by set inclusion, the set 𝔗(𝐿) of ideals of 𝐿 forms a lattice in which the lattice operations are given by
𝑖𝑛𝑓{𝐽, 𝐾} = 𝐽 ∩ 𝐾;
𝑠𝑢𝑝{𝐽, 𝐾} = {𝑥 ∈ 𝐿 : there exist 𝑗 ∈ 𝐽 and 𝑘 ∈ 𝐾 such that 𝑥 ≤ 𝑗 ∨ 𝑘}.
Note: By the above theorem the ideal lattice 𝔗(𝐿) is a meet subsemilattice of 𝒪(𝐿), since the intersection of two ideals is again an ideal. It is not a sublattice, since the union of two ideals need not be an ideal. This situation, in which a subsemilattice of a given lattice 𝐿 that is not a sublattice of 𝐿 nevertheless forms a lattice with respect to the same order as 𝐿, is quite common in lattice theory. Another instance has been seen before in Example 2.1.22, where the set 𝑆𝑢𝑏 𝑉 of subspaces of a vector space 𝑉 forms a lattice in which 𝑖𝑛𝑓{𝐴, 𝐵} = 𝐴 ∩ 𝐵 and 𝑠𝑢𝑝{𝐴, 𝐵} = 𝐴 + 𝐵, so that (𝑆𝑢𝑏 𝑉; ⊆) forms a lattice that is a ∩-subsemilattice, but not a sublattice, of (ℙ(𝑉); ⊆). As we shall now see, a further instance is provided by a closure mapping on a lattice.
Proof: Suppose that 𝑓: 𝐿 → 𝐿 is a closure and let 𝑥 ∈ 𝐼𝑚𝑓; then 𝑥 = 𝑓(𝑦) for some 𝑦 ∈ 𝐿. Since 𝑓 is a closure, 𝑓² = 𝑓, so 𝑓(𝑥) = 𝑓²(𝑦) = 𝑓(𝑦); this gives 𝑓(𝑥) = 𝑥. Consequently we see that 𝐼𝑚𝑓 = {𝑥 ∈ 𝐿 : 𝑓(𝑥) = 𝑥}. If 𝑎, 𝑏 ∈ 𝐼𝑚𝑓 then 𝑓(𝑎) = 𝑎 and 𝑓(𝑏) = 𝑏. Since 𝑓 ≥ 𝑖𝑑𝐿,
we have 𝑓(𝑎) ∧ 𝑓(𝑏) = 𝑎 ∧ 𝑏 ≤ 𝑓(𝑎 ∧ 𝑏). (1)
Also 𝑎 ∧ 𝑏 ≤ 𝑎 and 𝑎 ∧ 𝑏 ≤ 𝑏 and 𝑓 is isotone, implies 𝑓(𝑎 ∧ 𝑏) ≤ 𝑓(𝑎) and 𝑓(𝑎 ∧ 𝑏) ≤
𝑓(𝑏).
This gives 𝑓(𝑎 ∧ 𝑏) ≤ 𝑓(𝑎) ∧ 𝑓(𝑏) (by connecting lemma). (2)
Combining (1) and (2) we get 𝑓(𝑎 ∧ 𝑏) = 𝑎 ∧ 𝑏. Thus 𝑎 ∧ 𝑏 ∈ 𝐼𝑚𝑓. It follows that Im𝑓 is a
meet subsemilattice of 𝐿.
As for the supremum in 𝐼𝑚𝑓 of 𝑎, 𝑏 ∈ 𝐼𝑚𝑓, we have 𝑎 ≤ 𝑎 ∨ 𝑏 and 𝑏 ≤ (𝑎 ∨ 𝑏), since 𝑓 is
isotone so 𝑓(𝑎) ≤ 𝑓(𝑎 ∨ 𝑏) and 𝑓(𝑏) ≤ 𝑓(𝑎 ∨ 𝑏), this gives 𝑓(𝑎) ∨ 𝑓(𝑏) = 𝑎 ∨ 𝑏 ≤ 𝑓(𝑎 ∨ 𝑏)
and so 𝑓(𝑎 ∨ 𝑏) ∈ 𝐼𝑚𝑓 is an upper bound of {𝑎, 𝑏}. Suppose now that 𝑐 = 𝑓(𝑐) ∈ 𝐼𝑚 𝑓 is
any other upper bound of {𝑎, 𝑏} in Im𝑓, then 𝑎 ≤ 𝑐 and 𝑏 ≤ 𝑐; this gives 𝑎 ∨ 𝑏 ≤ 𝑐. By
isotonicity of 𝑓 we obtain 𝑓(𝑎 ∨ 𝑏) ≤ 𝑓(𝑐) = 𝑐. Thus in the subset 𝐼𝑚𝑓, the upper bound
𝑓(𝑎 ∨ 𝑏) is less than or equal to every upper bound of {𝑎, 𝑏}. Consequently 𝑠𝑢𝑝 {𝑎, 𝑏} exists
in Im𝑓 and is 𝑓(𝑎 ∨ 𝑏 ).
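A concrete closure illustrating this result; the interval closure on ℙ({0, 1, 2, 3}) is an assumed example of our own, not one from the text:

```python
from itertools import combinations

def closure(a):
    """Interval closure on subsets of {0,1,2,3}: the smallest interval ⊇ a."""
    return frozenset(range(min(a), max(a) + 1)) if a else frozenset()

subsets = [frozenset(c) for r in range(5) for c in combinations(range(4), r)]

# Closure axioms: A ⊆ f(A), f isotone, f∘f = f.
assert all(a <= closure(a) and closure(closure(a)) == closure(a)
           for a in subsets)
assert all(closure(a) <= closure(b)
           for a in subsets for b in subsets if a <= b)

# In Im f the infimum of {A, B} is A ∩ B (intersection of intervals is
# an interval), but the supremum is f(A ∪ B), which can exceed A ∪ B:
a, b = frozenset({0}), frozenset({2})
assert closure(a | b) == frozenset({0, 1, 2})
```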
Definition 2.4.1: If 𝐿 and 𝑀 are join semilattices then 𝑓: 𝐿 → 𝑀 is said to be a join morphism
if 𝑓( 𝑥 ∨ 𝑦 ) = 𝑓(𝑥) ∨ 𝑓(𝑦) for all 𝑥, 𝑦 ∈ 𝐿.
Dually if 𝐿 and 𝑀 are meet semilattices then 𝑓: 𝐿 → 𝑀 is said to be a meet morphism if
𝑓( 𝑥 ∧ 𝑦 ) = 𝑓(𝑥) ∧ 𝑓(𝑦) for all 𝑥, 𝑦 ∈ 𝐿.
Definition 2.4.2: If 𝐿 and 𝑀 are lattices then 𝑓: 𝐿 → 𝑀 is a lattice morphism if it is both a join
morphism and a meet morphism.
Theorem 2.4.4: If 𝐿 and 𝑀 are join semilattices then every residuated mapping 𝑓: 𝐿 → 𝑀 is a
complete join morphism.
Proof: Suppose that (𝑥𝛼)𝛼∈𝐼 is a family of elements of 𝐿 such that 𝑥 = ⋁𝛼∈𝐼 𝑥𝛼 exists in 𝐿. For each 𝛼 ∈ 𝐼 we have 𝑥 ≥ 𝑥𝛼, and since 𝑓 is residuated it is isotone; thus 𝑓(𝑥) ≥ 𝑓(𝑥𝛼) for each 𝛼 ∈ 𝐼, so 𝑓(𝑥) is an upper bound of {𝑓(𝑥𝛼) : 𝛼 ∈ 𝐼}. If 𝑦 ≥ 𝑓(𝑥𝛼) for each 𝛼 ∈ 𝐼 then, by the fact that 𝑓⁺ is isotone, we have 𝑓⁺(𝑦) ≥ 𝑓⁺(𝑓(𝑥𝛼)). Since 𝑓 is residuated, 𝑓⁺ ∘ 𝑓 ≥ 𝑖𝑑𝐿, therefore 𝑓⁺(𝑦) ≥ 𝑥𝛼 for each 𝛼 ∈ 𝐼 and so 𝑓⁺(𝑦) ≥ ⋁𝛼∈𝐼 𝑥𝛼 = 𝑥. Since 𝑓 is residuated, 𝑦 ≥ 𝑓(𝑓⁺(𝑦)) ≥ 𝑓(𝑥). Thus we see that ⋁𝛼∈𝐼 𝑓(𝑥𝛼) exists and equals 𝑓(𝑥).
Definition2.4.5: We shall say that lattices 𝐿 and 𝑀 are isomorphic if they are isomorphic as
ordered sets.
Theorem 2.4.6: Lattices 𝐿 and 𝑀 are isomorphic if and only if there is a bijection 𝑓: 𝐿 → 𝑀
that is a ∨-morphism.
Example 2.4.7: Let 𝐿 be a lattice; then every isotone mapping from 𝐿 to a lattice 𝑀 is a lattice morphism if and only if 𝐿 is a chain.
Theorem 2.5.3: Every complete lattice has a top and a bottom element.
Proof: Let 𝐿 be a complete lattice; then by definition every subset 𝑀 of 𝐿 has both a supremum and an infimum. Taking 𝑀 = 𝐿, we see that 𝑠𝑢𝑝𝐿 𝐿 is the top element of 𝐿 and 𝑖𝑛𝑓𝐿 𝐿 is the bottom element.
The following lemma provides us a way of showing that a lattice is complete by only proving
that the infimum exists, saving us half the work.
Lemma 2.5.4: (Half-work Lemma) A poset 𝑃 is a complete lattice if and only if 𝑖𝑛𝑓(𝑆) exists
for every 𝑆 ⊆ 𝑃.
Note: Note that the set 𝑇 in the proof may be empty. In that case 𝑎 would be the top element of 𝑃.
Solution: We have already seen that ℙ(𝐸) is a bounded lattice with top element 𝐸 and bottom element ∅. Let 𝑋 ⊆ ℙ(𝐸) and let 𝐵 be any upper bound of 𝑋 with respect to ⊆. This means 𝐶 ⊆ 𝐵 for each 𝐶 ∈ 𝑋, but then ⋃𝐶∈𝑋 𝐶 ⊆ 𝐵 as well (if 𝑥 ∈ ⋃𝐶∈𝑋 𝐶 then 𝑥 ∈ 𝐶′ for some 𝐶′ ∈ 𝑋, and 𝐶′ ⊆ 𝐵, so 𝑥 ∈ 𝐵). So the union is contained in every other upper bound, hence by definition ⋃𝐶∈𝑋 𝐶 = 𝑠𝑢𝑝 𝑋. A dual argument applies to the intersection, showing that ℙ(𝐸) is a complete lattice.
Example 2.5.6: Let 𝐿 be the lattice formed by adding to the chain (ℚ; ≤) of rationals a top element ∞ and a bottom element −∞; then 𝐿 is bounded but is not complete.
Since 𝐿 has a top element and a bottom element, it is bounded. But 𝐿 is not complete: the set {𝑥 ∈ ℚ : 𝑥 > 0 and 𝑥² ≤ 2} has no least upper bound in 𝐿, since √2 is not rational.
Definition 2.5.7: Let 𝑅 be any relation on set 𝑋 then by 𝑅 𝑒 we mean the smallest equivalence
relation on 𝑋 containing 𝑅. We call it the equivalence relation generated by 𝑅.
Example 2.5.8ꓽ For every non empty set 𝐸 the set 𝐸𝑞𝑢 𝐸 of equivalence relations on 𝐸 is a
complete lattice.
The relationship between complete semilattices and complete lattices is highlighted by the
following result.
Theorem 2.5.9: A ⋀-complete ⋀-semilattice is complete if and only if it has a top element.
Proof: Let 𝐿 be a ⋀-complete ⋀-semilattice. If 𝐿 is complete then 𝑠𝑢𝑝𝐿 𝐿 is the top element of 𝐿. Conversely, suppose that 𝐿 is a ⋀-complete ⋀-semilattice with top element 1, and let 𝑋 = {𝑥𝛼 : 𝛼 ∈ 𝐴} be a non-empty subset of 𝐿. We show that 𝑠𝑢𝑝𝐿 𝑋 exists. Since 𝐿 has a top element 1, the set 𝑋↑ of upper bounds of 𝑋 is non-empty; say 𝑋↑ = {𝑚𝛽 | 𝛽 ∈ 𝐵}. Since 𝐿 is ⋀-complete, ⋀𝛽∈𝐵 𝑚𝛽 exists. Since each 𝑚𝛽 is an upper bound of 𝑋 we have 𝑥𝛼 ≤ 𝑚𝛽 for all 𝛼 ∈ 𝐴 and 𝛽 ∈ 𝐵, and it follows that 𝑥𝛼 ≤ ⋀𝛽∈𝐵 𝑚𝛽 for every 𝑥𝛼 ∈ 𝑋, so that ⋀𝛽∈𝐵 𝑚𝛽 ∈ 𝑋↑; moreover ⋀𝛽∈𝐵 𝑚𝛽 ≤ 𝑚𝛽 for every 𝛽 ∈ 𝐵. Hence ⋀𝛽∈𝐵 𝑚𝛽 is the supremum of 𝑋 in 𝐿, and 𝐿 is complete.
Example 2.5.10: Let 𝐸 be an infinite set and let ℙ𝑓(𝐸) be the set of all finite subsets of 𝐸. We have already seen that (ℙ𝑓(𝐸); ⊆) is an ordered set. Let 𝑋 be any non-empty subset of ℙ𝑓(𝐸); then every member of 𝑋 is a finite set, so ⋂𝐶∈𝑋 𝐶 is finite and hence belongs to ℙ𝑓(𝐸). If 𝐵 is any lower bound of 𝑋 then 𝐵 ⊆ 𝐶 for all 𝐶 ∈ 𝑋, and so 𝐵 ⊆ ⋂𝐶∈𝑋 𝐶. Since 𝐵 is arbitrary, every lower bound of 𝑋 is contained in ⋂𝐶∈𝑋 𝐶, which is therefore the greatest lower bound. Thus ℙ𝑓(𝐸) is ⋀-complete. Now ℙ𝑓(𝐸) ∪ {𝐸} is a ⋀-complete ⋀-semilattice with top element 𝐸, so by (Theorem 2.5.9) ℙ𝑓(𝐸) ∪ {𝐸} is complete.
Example 2.5.11: Let 𝐺 be a group and let 𝑆𝑢𝑏 𝐺 be the set of all subgroups of 𝐺, ordered by set inclusion. Let 𝐻, 𝐾 ∈ 𝑆𝑢𝑏 𝐺; then 𝐻 ∩ 𝐾 ∈ 𝑆𝑢𝑏 𝐺, and since 𝐻 ∩ 𝐾 ⊆ 𝐻 and 𝐻 ∩ 𝐾 ⊆ 𝐾, 𝐻 ∩ 𝐾 is a lower bound of 𝐻 and 𝐾. If 𝑊 is any other lower bound of 𝐻 and 𝐾 then 𝑊 ⊆ 𝐻 and 𝑊 ⊆ 𝐾, which implies 𝑊 ⊆ 𝐻 ∩ 𝐾. So 𝐻 ∩ 𝐾 is the greatest lower bound of 𝐻 and 𝐾, and hence 𝑆𝑢𝑏 𝐺 is a ∩-semilattice. To show that it is also meet complete, let {𝐻𝜆 : 𝜆 ∈ 𝐴} be any family of subgroups; then ⋂𝜆∈𝐴 𝐻𝜆 is also a subgroup and ⋂𝜆∈𝐴 𝐻𝜆 = ⋀𝜆∈𝐴 𝐻𝜆. Thus 𝑆𝑢𝑏 𝐺 is a meet complete meet semilattice with top element 𝐺, so by (Theorem 2.5.9) 𝑆𝑢𝑏 𝐺 is a complete lattice.
Example 2.5.12: Consider the lattice (ℕ ∪ {0}; |). Since every natural number divides 0, 0 is the top element, and since 1 divides every natural number, 1 is the bottom element. If 𝑋 is any non-empty subset of ℕ ∪ {0} then, as already seen in (Example 2.1.21), 𝑖𝑛𝑓 𝑋 exists and is the greatest common divisor of the elements of 𝑋. So ℕ ∪ {0} is a meet complete meet semilattice with top element 0. Thus by (Theorem 2.5.9), (ℕ ∪ {0}; |) is a complete lattice.
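A quick computational check of these operations, using Python's `math.gcd`; the `lcm` helper below treats 0 as the top of the divisibility order:

```python
from math import gcd
from functools import reduce

def lcm(a, b):
    # In (ℕ ∪ {0}; |) the element 0 is the top: every n divides 0.
    return 0 if a == 0 or b == 0 else a * b // gcd(a, b)

X = [4, 6, 10]
inf_X = reduce(gcd, X)   # greatest common divisor = meet
sup_X = reduce(lcm, X)   # least common multiple  = join
assert inf_X == 2 and sup_X == 60
assert gcd(12, 0) == 12 and lcm(12, 0) == 0   # 0 acts as the top element
```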
Definition 2.5.14: We call two sets equipotent if and only if there exists a one-to-one function from one set onto the other.
Next we give a lattice-theoretic proof of one of the well-known theorems of set theory, namely the Schroeder-Bernstein Theorem.
Theorem 2.5.15: (Schroeder-Bernstein Theorem) If 𝐸 and 𝐹 are sets and if there are injections 𝑓: 𝐸 → 𝐹 and 𝑔: 𝐹 → 𝐸, then 𝐸 and 𝐹 are equipotent.
Proof: We use the notation 𝑖𝑋: 𝑃(𝑋) → 𝑃(𝑋) to denote the antitone mapping that sends every subset of 𝑋 to its complement in 𝑋. Consider the mapping 𝜓: 𝑃(𝐸) → 𝑃(𝐸) given by
𝜓 = 𝑖𝐸 ∘ 𝑔→ ∘ 𝑖𝐹 ∘ 𝑓→.
Since 𝑓→ and 𝑔→ are isotone and 𝑖𝐸, 𝑖𝐹 are antitone, the composite 𝜓 is isotone. Since 𝑃(𝐸) is a complete lattice and 𝜓: 𝑃(𝐸) → 𝑃(𝐸) is isotone, by the Fixed Point Theorem there exists 𝐺 ⊆ 𝐸 such that 𝜓(𝐺) = 𝐺, and therefore 𝑖𝐸(𝐺) = (𝑔→ ∘ 𝑖𝐹 ∘ 𝑓→)(𝐺). This situation may be summarized pictorially.
Now since 𝑓 and 𝑔 are injective by hypothesis, this configuration shows that we can define a bijection ℎ: 𝐸 → 𝐹 by the prescription
ℎ(𝑥) = 𝑓(𝑥) if 𝑥 ∈ 𝐺; ℎ(𝑥) = the unique element of 𝑔←({𝑥}) if 𝑥 ∉ 𝐺.
Case 3: If 𝑥 ∉ 𝐺 and 𝑥ˊ ∉ 𝐺, then ℎ(𝑥) and ℎ(𝑥ˊ) are the unique elements of 𝑔←({𝑥}) and 𝑔←({𝑥ˊ}) respectively, so 𝑔(ℎ(𝑥)) = 𝑥 and 𝑔(ℎ(𝑥ˊ)) = 𝑥ˊ; hence ℎ(𝑥) = ℎ(𝑥ˊ) would give 𝑥 = 𝑥ˊ, which is not possible.
Combining the cases we conclude that ℎ is injective.
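The construction in the proof can be played out on a toy finite example; the particular sets and injections below are assumptions for illustration:

```python
# ψ(A) = E \ g[F \ f[A]] is isotone on P(E); iterating from E reaches
# a fixed point G, from which the bijection h is assembled.
E = {0, 1, 2}
F = {"a", "b", "c"}
f = {0: "a", 1: "b", 2: "c"}    # injection E → F
g = {"a": 1, "b": 2, "c": 0}    # injection F → E

def psi(A):
    return E - {g[y] for y in F - {f[x] for x in A}}

G = E
while psi(G) != G:              # finite descent must stop at a fixed point
    G = psi(G)

g_inv = {v: k for k, v in g.items()}
h = {x: (f[x] if x in G else g_inv[x]) for x in E}
assert sorted(h.values()) == sorted(F)   # h is a bijection E → F
```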
Proof: We first prove that 𝐼𝑚𝑓 is a ⋀-complete ⋀-semilattice. To see this suppose that 𝑓: 𝐿 →
𝐿 is a closure, let 𝑥 ∈ 𝐼𝑚𝑓 then 𝑥 = 𝑓(𝑦) for some𝑦 ∈ 𝐿, since 𝑓 is isotone, so 𝑓(𝑥) = 𝑓 2 (𝑦).
But since 𝑓 is a closure so 𝑓 2 (𝑦) = 𝑓(𝑦) = 𝑥. Thus 𝑓(𝑥) = 𝑥. Therefore Im𝑓 = {𝑥 ∈
𝐿: 𝑓(𝑥) = 𝑥} the set of fixed points of 𝑓. Now suppose that 𝐶 ⊆ 𝐼𝑚𝑓. Since Im𝑓 ⊆ 𝐿 and 𝐿 is
complete, 𝑖𝑛𝑓𝐿 𝐶 exists; let 𝑎 = 𝑖𝑛𝑓𝐿 𝐶. By definition of infimum, for all 𝑥 ∈ 𝐶 we have 𝑎 ≤ 𝑥; since 𝑓 is isotone, 𝑓(𝑎) ≤ 𝑓(𝑥) = 𝑥 (since 𝑥 ∈ 𝐼𝑚𝑓). Thus 𝑓(𝑎) is a lower bound of 𝐶. Since 𝑎 is the greatest lower bound we have 𝑓(𝑎) ≤ 𝑖𝑛𝑓𝐿 𝐶 = 𝑎; also, since 𝑓 is a closure, 𝑓 ≥ 𝑖𝑑𝐿 and so 𝑓(𝑎) ≥ 𝑎. Therefore 𝑓(𝑎) = 𝑎, which implies 𝑎 ∈ 𝐼𝑚𝑓. Thus 𝐼𝑚𝑓 is ⋀-complete. Since 𝐿 is complete it
has a top element 1; then 𝑓(1) ∈ 𝐿 and so 𝑓(1) ≤ 1 (since 1 is the top element). Also, since 𝑓 is a closure, 𝑓 ≥ 𝑖𝑑𝐿, which implies 𝑓(1) ≥ 1. Thus 𝑓(1) = 1, so 1 ∈ 𝐼𝑚𝑓. Thus 𝐼𝑚𝑓 is a ⋀-complete ⋀-semilattice with top element 1, and therefore by (Theorem 2.5.9) 𝐼𝑚𝑓 is a complete lattice. Suppose now that 𝐴 ⊆ 𝐼𝑚𝑓. If 𝑎 = 𝑖𝑛𝑓𝐿 𝐴 then from the above we have 𝑎 = 𝑓(𝑎) ∈ 𝐼𝑚𝑓. If now 𝑦 ∈ 𝐼𝑚𝑓 is such that 𝑦 ≤ 𝑥 for every 𝑥 ∈ 𝐴, then 𝑦 ≤ 𝑎. Consequently 𝑎 = 𝑖𝑛𝑓𝐼𝑚𝑓 𝐴, and therefore 𝑖𝑛𝑓𝐼𝑚𝑓 𝐴 = 𝑖𝑛𝑓𝐿 𝐴.
Now let 𝑏 = 𝑠𝑢𝑝𝐿 𝐴 and 𝑏∗ = 𝑠𝑢𝑝𝐼𝑚𝑓 𝐴. We show that 𝑏∗ = 𝑓(𝑏). Since 𝐼𝑚𝑓 is complete, we have 𝑏∗ ∈ 𝐼𝑚𝑓, so 𝑏∗ = 𝑓(𝑏∗); and since 𝑏∗ ≥ 𝑥 for every 𝑥 ∈ 𝐴 we have 𝑏∗ ≥ 𝑠𝑢𝑝𝐿 𝐴 = 𝑏. Thus, using the fact that 𝑓 is isotone, 𝑏∗ = 𝑓(𝑏∗) ≥ 𝑓(𝑏). But 𝑏 ≥ 𝑥 for every 𝑥 ∈ 𝐴, so by isotonicity of 𝑓 we have 𝑓(𝑏) ≥ 𝑓(𝑥) = 𝑥 for all 𝑥 ∈ 𝐴 (since 𝑥 ∈ 𝐼𝑚𝑓), and hence 𝑓(𝑏) ≥ 𝑠𝑢𝑝𝐼𝑚𝑓 𝐴 = 𝑏∗. This implies 𝑏∗ = 𝑓(𝑏) as asserted.
Remark 2.5.18: We now describe an important application of the above theorem. For this purpose, given an ordered set 𝐸, consider the mapping 𝜗: 𝑃(𝐸) → 𝑃(𝐸) given by 𝜗(𝐴) = 𝐴↓ and the mapping 𝜑: 𝑃(𝐸) → 𝑃(𝐸) given by 𝜑(𝐴) = 𝐴↑. If 𝐴, 𝐵 ∈ 𝑃(𝐸) with 𝐴 ⊆ 𝐵 then clearly every lower bound of 𝐵 is a lower bound of 𝐴, whence 𝐵↓ ⊆ 𝐴↓. Hence 𝜗 is antitone; dually, so is 𝜑. Now every element of 𝐴 is clearly a lower bound of the set of upper bounds of 𝐴, whence 𝐴 ⊆ 𝐴↑↓ and therefore 𝑖𝑑𝑃(𝐸) ≤ 𝜗𝜑. Dually, every element of 𝐴 is an upper bound of the set of lower bounds of 𝐴, so 𝐴 ⊆ 𝐴↓↑ and therefore 𝑖𝑑𝑃(𝐸) ≤ 𝜑𝜗. Consequently the pair (𝜗, 𝜑) establishes a Galois connection on 𝑃(𝐸). We shall focus on the associated closure 𝐴 ⟼ 𝐴↑↓; for this purpose we shall also require the following facts.
Theorem 2.5.19: Let 𝐸 be an ordered set. If (𝐴𝛼 )𝛼∈𝐼 is a family of subsets of 𝐸 then;
(⋃𝛼∈𝐼 𝐴𝛼 )↑ = ⋂𝛼∈𝐼 𝐴↑𝛼 and (⋃𝛼∈𝐼 𝐴𝛼 )↓ = ⋂𝛼∈𝐼 𝐴↓𝛼 .
Proof: If 𝐸 does not have a top element or a bottom element, we begin by adjoining whichever of these bounds is missing; thus without loss of generality we may assume that 𝐸 is a bounded ordered set.
Let 𝑓: 𝑃(𝐸) → 𝑃(𝐸) be the closure mapping given by 𝑓(𝐴) = 𝐴↑↓. Then by the previous theorem 𝐿 = 𝐼𝑚𝑓 is a complete lattice. We have 𝑓({𝑥}) = {𝑥}↑↓ = 𝑥↓ for all 𝑥 ∈ 𝐸. Now 𝑧 ∈ 𝑥↓ if and only if 𝑧 ≤ 𝑥, so if 𝑥 ≤ 𝑦 then 𝑧 ∈ 𝑥↓ implies 𝑧 ≤ 𝑦, i.e. 𝑧 ∈ 𝑦↓; conversely 𝑥↓ ⊆ 𝑦↓ gives 𝑥 ≤ 𝑦 since 𝑥 ∈ 𝑥↓. Thus 𝑥 ≤ 𝑦 if and only if 𝑥↓ ⊆ 𝑦↓, if and only if 𝑓({𝑥}) ⊆ 𝑓({𝑦}). It follows that 𝑓 induces an embedding 𝑓′: 𝐸 → 𝐿 given by 𝑓′(𝑥) = 𝑓({𝑥}) = {𝑥}↑↓ = 𝑥↓. Suppose now that 𝐴 = {𝑥𝛼 : 𝛼 ∈ 𝐼} ⊆ 𝐸. If
𝑎 = ⋀𝛼∈𝐼 𝑥𝛼 then clearly 𝑎↓ = ⋂𝛼∈𝐼 𝑥𝛼↓ so that 𝑓´(𝑎) = ⋂𝛼∈𝐼 𝑓´(𝑥𝛼 ), that is existing infima are
preserved.
Suppose now that 𝑏 = ⋁𝛼∈𝐼 𝑥𝛼 exists. Since
𝑦 ≥ 𝑏 if and only if 𝑦 ≥ 𝑥𝛼 for all 𝛼 ∈ 𝐼, i.e. 𝑦 ∈ ⋂𝛼∈𝐼 𝑥𝛼↑ = ⋂𝛼∈𝐼 {𝑥𝛼}↑↓↑ = (⋃𝛼∈𝐼 {𝑥𝛼}↑↓)↑ (by Theorem 2.5.19),
we see that 𝑏↑ = (⋃𝛼∈𝐼 {𝑥𝛼}↑↓)↑. Consequently,
𝑓′(𝑏) = {𝑏}↑↓ = (⋃𝛼∈𝐼 {𝑥𝛼}↑↓)↑↓
= 𝑓(𝑠𝑢𝑝ℙ(𝐸) {{𝑥𝛼}↑↓ | 𝛼 ∈ 𝐼})
= 𝑠𝑢𝑝𝐼𝑚𝑓 {{𝑥𝛼}↑↓ | 𝛼 ∈ 𝐼} (by Theorem 2.16)
= 𝑠𝑢𝑝𝐼𝑚𝑓 {𝑓′(𝑥𝛼) | 𝛼 ∈ 𝐼},
so that existing suprema are also preserved.
Definition 2.5.22: The complete lattice 𝐿 = 𝐼𝑚𝑓 = {𝐴↑↓ | 𝐴 ∈ 𝑃(𝐸)} of (Theorem 2.5.20) is called the Dedekind-MacNeille completion of 𝐸.
Chapter 3
Definition 3.1.4: Let 𝐿 be a lattice; then 𝐿 is said to be modular if it satisfies the modular law, that is,
for all 𝑎, 𝑏, 𝑐 ∈ 𝐿, 𝑎 ≥ 𝑐 implies 𝑎 ∧ (𝑏 ∨ 𝑐) = (𝑎 ∧ 𝑏) ∨ 𝑐,
or equivalently
for all 𝑎, 𝑏, 𝑐 ∈ 𝐿, 𝑎 ≤ 𝑐 implies 𝑎 ∨ (𝑏 ∧ 𝑐) = (𝑎 ∨ 𝑏) ∧ 𝑐.
𝑀3 is modular; it does not, however, satisfy the distributive law. To see this note that in the diagram of 𝑀3 we have
𝑎 ∧ (𝑏 ∨ 𝑐) = 𝑎 ∧ 1 = 𝑎 and (𝑎 ∧ 𝑏) ∨ (𝑎 ∧ 𝑐) = 0 ∨ 0 = 0, and 𝑎 ≠ 0, so 𝑀3 is not distributive. The smallest lattice which is not modular is the pentagon (𝑁5) shown below;
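Both facts can be confirmed by brute force on the five-element diagrams; in the sketch below the `below` maps (names of our own choosing) encode, for each element, the set of elements lying below it:

```python
from itertools import product

def make_lattice(below):
    elems = list(below)
    def meet(x, y):   # greatest common lower bound
        return max(below[x] & below[y], key=lambda z: len(below[z]))
    def join(x, y):   # least common upper bound
        ups = [z for z in elems if x in below[z] and y in below[z]]
        return min(ups, key=lambda z: len(below[z]))
    def modular():    # c ≤ a  implies  a ∧ (b ∨ c) = (a ∧ b) ∨ c
        return all(meet(a, join(b, c)) == join(meet(a, b), c)
                   for a, b, c in product(elems, repeat=3) if c in below[a])
    def distributive():
        return all(meet(a, join(b, c)) == join(meet(a, b), meet(a, c))
                   for a, b, c in product(elems, repeat=3))
    return modular, distributive

# M3 (diamond): 0 < a, b, c < 1 with a, b, c pairwise incomparable.
m3 = {"0": {"0"}, "a": {"0", "a"}, "b": {"0", "b"}, "c": {"0", "c"},
      "1": {"0", "a", "b", "c", "1"}}
# N5 (pentagon): 0 < x < y < 1 and 0 < w < 1, w incomparable to x and y.
n5 = {"0": {"0"}, "x": {"0", "x"}, "y": {"0", "x", "y"}, "w": {"0", "w"},
      "1": {"0", "x", "y", "w", "1"}}

m3_mod, m3_dist = make_lattice(m3)
n5_mod, _ = make_lattice(n5)
assert m3_mod() and not m3_dist()   # M3: modular but not distributive
assert not n5_mod()                 # N5: not modular
```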
Lemma 3.1.6: For every lattice 𝐿 and all 𝑎, 𝑏, 𝑐 ∈ 𝐿, let 𝑣 = 𝑎 ∧ (𝑏 ∨ 𝑐) and 𝑢 = (𝑎 ∧ 𝑏) ∨ 𝑐. Then 𝑣 > 𝑢 implies 𝑣 ∧ 𝑏 = 𝑢 ∧ 𝑏 and 𝑣 ∨ 𝑏 = 𝑢 ∨ 𝑏.
Theorem 3.1.9: (𝑁5 Theorem) A lattice 𝐿 is modular if and only if it does not contain a
sublattice isomorphic to 𝑁5 .
Example 3.1.10: The set of normal subgroups of any group 𝐺 forms a modular lattice.
Proof: Let 𝒩-Sub 𝐺 denote the set of normal subgroups of 𝐺. It is trivial to show that 𝒩-Sub 𝐺 is a poset under set containment. We first show that 𝒩-Sub 𝐺 forms a lattice with
𝐺1 ∧ 𝐺2 = 𝐺1 ∩ 𝐺2 and
𝐺1 ∨ 𝐺2 = 𝐺1𝐺2 = {𝑔1𝑔2 : 𝑔1 ∈ 𝐺1 and 𝑔2 ∈ 𝐺2},
where 𝐺1, 𝐺2 are any two members of 𝒩-Sub 𝐺. For all 𝑥 ∈ 𝐺1 ∩ 𝐺2 and all 𝑔 ∈ 𝐺 we have 𝑥 ∈ 𝐺1, so 𝑔𝑥𝑔⁻¹ ∈ 𝐺1, and 𝑥 ∈ 𝐺2, so 𝑔𝑥𝑔⁻¹ ∈ 𝐺2. Thus for all 𝑥 ∈ 𝐺1 ∩ 𝐺2 and all 𝑔 ∈ 𝐺 we have 𝑔𝑥𝑔⁻¹ ∈ 𝐺1 ∩ 𝐺2. Also 𝐺1 ∩ 𝐺2 ⊆ 𝐺1 and 𝐺1 ∩ 𝐺2 ⊆ 𝐺2; therefore 𝐺1 ∩ 𝐺2 is a lower bound of 𝐺1 and 𝐺2. Let 𝑊 be any other lower bound of 𝐺1 and 𝐺2; then 𝑊 ⊆ 𝐺1 and 𝑊 ⊆ 𝐺2, which implies 𝑊 ⊆ 𝐺1 ∩ 𝐺2. Therefore 𝐺1 ∧ 𝐺2 = 𝐺1 ∩ 𝐺2. Now let 𝑔 ∈ 𝐺 and 𝑥 ∈ 𝐺1𝐺2; then 𝑥 = 𝑔1𝑔2 for some 𝑔1 ∈ 𝐺1 and 𝑔2 ∈ 𝐺2. Thus 𝑔𝑥𝑔⁻¹ = 𝑔(𝑔1𝑔2)𝑔⁻¹ = (𝑔𝑔1𝑔⁻¹)(𝑔𝑔2𝑔⁻¹) ∈ 𝐺1𝐺2, so 𝐺1𝐺2 ∈ 𝒩-Sub 𝐺. This proves the required claim. To prove that 𝒩-Sub 𝐺 is a modular lattice we shall show that for 𝐺1, 𝐺2, 𝐺3 in 𝒩-Sub 𝐺, 𝐺2 ⊆ 𝐺1 implies 𝐺1 ∩ (𝐺2𝐺3) = 𝐺2(𝐺1 ∩ 𝐺3).
Clearly 𝐺2 (𝐺1 ⋂𝐺3 ) ⊆ 𝐺1 ⋂(𝐺2 𝐺3 ) (since every lattice satisfies modular inequality)
To prove the reverse inclusion let 𝑥 ∈ 𝐺1 ∩ (𝐺2𝐺3). This gives 𝑥 ∈ 𝐺1 and 𝑥 ∈ 𝐺2𝐺3, so 𝑥 = 𝑔2𝑔3 for some 𝑔2 ∈ 𝐺2 and 𝑔3 ∈ 𝐺3. Since 𝑔2 ∈ 𝐺2 ⊆ 𝐺1 and 𝑥 ∈ 𝐺1, we have 𝑔3 = 𝑔2⁻¹𝑥 ∈ 𝐺1. Also 𝑔3 ∈ 𝐺3; hence 𝑔3 ∈ 𝐺1 ∩ 𝐺3, so 𝑥 = 𝑔2𝑔3 ∈ 𝐺2(𝐺1 ∩ 𝐺3). Therefore 𝐺1 ∩ (𝐺2𝐺3) = 𝐺2(𝐺1 ∩ 𝐺3), showing that 𝒩-Sub 𝐺 is modular.
Theorem 3.2.1: A lattice 𝐿 is modular if and only if the ideal lattice 𝔗(𝐿) is modular.
Proof: Suppose first that the ideal lattice 𝔗(𝐿) is modular. The principal-ideal map 𝑥 ⟼ 𝑥↓ embeds 𝐿 as a sublattice of 𝔗(𝐿), and every sublattice of a modular lattice is modular; hence 𝐿 is modular. Conversely, suppose that 𝐿 is modular and let 𝐼, 𝐽, 𝐾 ∈ 𝔗(𝐿) with 𝐾 ⊆ 𝐼. It suffices to show that 𝐼 ∧ (𝐽 ∨ 𝐾) ⊆ (𝐼 ∧ 𝐽) ∨ 𝐾. Let 𝑥 ∈ 𝐼 ∩ (𝐽 ∨ 𝐾); then 𝑥 ≤ 𝑗 ∨ 𝑘 for some 𝑗 ∈ 𝐽 and 𝑘 ∈ 𝐾. Since 𝑥, 𝑘 ∈ 𝐼 we have 𝑥 ∨ 𝑘 ∈ 𝐼, and since 𝑘 ≤ 𝑥 ∨ 𝑘 ≤ 𝑗 ∨ 𝑘, the modular law gives 𝑥 ≤ 𝑥 ∨ 𝑘 = (𝑥 ∨ 𝑘) ∧ (𝑗 ∨ 𝑘) = ((𝑥 ∨ 𝑘) ∧ 𝑗) ∨ 𝑘. Now (𝑥 ∨ 𝑘) ∧ 𝑗 ∈ 𝐼 ∩ 𝐽 = 𝐼 ∧ 𝐽 and 𝑘 ∈ 𝐾, so 𝑥 ∈ (𝐼 ∧ 𝐽) ∨ 𝐾. Hence 𝔗(𝐿) is modular.
Theorem 3.2.2: (Shearing Identity) A lattice 𝐿 is modular if and only if for all 𝑥, 𝑦, 𝑧 ∈ 𝐿
𝑥 ∧ (𝑦 ∨ 𝑧) = 𝑥 ∧ [(𝑦 ∧ (𝑥 ∨ 𝑧)) ∨ 𝑧] (4)
Proof: We first prove that modularity implies the shearing identity, using the fact that 𝑥 ∨ 𝑧 ≥ 𝑧.
We have [𝑦 ∧ (𝑥 ∨ 𝑧)] ∨ 𝑧 = 𝑧 ∨ [𝑦 ∧ (𝑥 ∨ 𝑧)] (since the join operation is commutative)
= (𝑧 ∨ 𝑦) ∧ (𝑥 ∨ 𝑧) (by modularity of 𝐿, since 𝑧 ≤ 𝑥 ∨ 𝑧)
= (𝑦 ∨ 𝑧) ∧ (𝑥 ∨ 𝑧) (since the join operation is commutative).
Now consider the right-hand side of (4):
𝑥 ∧ [(𝑦 ∧ (𝑥 ∨ 𝑧)) ∨ 𝑧] = 𝑥 ∧ (𝑦 ∨ 𝑧) ∧ (𝑥 ∨ 𝑧) = 𝑥 ∧ (𝑦 ∨ 𝑧),
using the fact that 𝑥 ≤ 𝑥 ∨ 𝑧. Hence (4) holds.
Conversely, suppose that 𝐿 satisfies the shearing identity; we show that 𝐿 is modular. Let 𝑥, 𝑦, 𝑧 ∈ 𝐿 with 𝑥 ≥ 𝑧. Then 𝑥 ∨ 𝑧 = 𝑥, so (4) gives
𝑥 ∧ (𝑦 ∨ 𝑧) = 𝑥 ∧ [(𝑦 ∧ 𝑥) ∨ 𝑧].
But (𝑥 ∧ 𝑦) ∨ 𝑧 ≤ 𝑥, since 𝑥 ∧ 𝑦 ≤ 𝑥 and 𝑧 ≤ 𝑥; hence 𝑥 ∧ [(𝑥 ∧ 𝑦) ∨ 𝑧] = (𝑥 ∧ 𝑦) ∨ 𝑧.
Thus 𝑥 ∧ (𝑦 ∨ 𝑧) = (𝑥 ∧ 𝑦) ∨ 𝑧 and so 𝐿 is modular.
Proof: Let 𝐿 be a modular lattice and let 𝑎, 𝑏, 𝑐 ∈ 𝐿 such that 𝑎 ≥ 𝑏,𝑎 ∨ 𝑐 = 𝑏 ∨ 𝑐 and𝑎 ∧
𝑐 = 𝑏 ∧ 𝑐. Then:
𝑎 = 𝑎 ∧ (𝑎 ∨ 𝑐) (since 𝑎 ≤ 𝑎 ∨ 𝑐)
= 𝑎 ∧ (𝑏 ∨ 𝑐 ) (because 𝑎 ∨ 𝑐 = 𝑏 ∨ 𝑐)
= 𝑎 ∧ (𝑐 ∨ 𝑏) (since the join operation is commutative)
= (𝑎 ∧ 𝑐) ∨ 𝑏 (by modularity of 𝐿)
= (𝑏 ∧ 𝑐) ∨ 𝑏 (since 𝑎 ∧ 𝑐 = 𝑏 ∧ c)
=𝑏 (since (𝑏 ∧ 𝑐) ≤ 𝑏).
This gives 𝑎 = 𝑏.
Conversely suppose that 𝐿 is a lattice satisfying the conditions stated in the theorem. Let
𝑎, 𝑏, 𝑐 ∈ 𝐿 with 𝑎 ≥ 𝑏. We can easily verify the following relations and their duals:
𝑎 ∧ (𝑏 ∨ 𝑐) = 𝑎 ∧ (𝑐 ∨ 𝑏) ≥ 𝑏 ∨ (𝑎 ∧ 𝑐), and
(𝑎 ∧ (𝑏 ∨ 𝑐)) ∧ 𝑐 = 𝑎 ∧ ((𝑏 ∨ 𝑐) ∧ 𝑐) = 𝑎 ∧ 𝑐. (7)
Also 𝑎 ∧ 𝑐 = (𝑎 ∧ 𝑐) ∧ 𝑐 ≤ (𝑏 ∨ (𝑎 ∧ 𝑐)) ∧ 𝑐 ≤ 𝑎 ∧ 𝑐, since 𝑏 ∨ (𝑎 ∧ 𝑐) ≤ 𝑎. Hence
(𝑏 ∨ (𝑎 ∧ 𝑐)) ∧ 𝑐 = 𝑎 ∧ 𝑐. (8)
Since 𝑏 ≤ 𝑎, the dual of relation (7) is (𝑏 ∨ (𝑎 ∧ 𝑐)) ∨ 𝑐 = 𝑏 ∨ 𝑐, and the dual of relation (8) is (𝑎 ∧ (𝑏 ∨ 𝑐)) ∨ 𝑐 = 𝑏 ∨ 𝑐. Thus we have (𝑎 ∧ (𝑏 ∨ 𝑐)) ∧ 𝑐 = (𝑏 ∨ (𝑎 ∧ 𝑐)) ∧ 𝑐 and (𝑎 ∧ (𝑏 ∨ 𝑐)) ∨ 𝑐 = (𝑏 ∨ (𝑎 ∧ 𝑐)) ∨ 𝑐, with 𝑎 ∧ (𝑏 ∨ 𝑐) ≥ 𝑏 ∨ (𝑎 ∧ 𝑐). Hence the assumed property implies that 𝑎 ∧ (𝑏 ∨ 𝑐) = 𝑏 ∨ (𝑎 ∧ 𝑐). So 𝐿 is modular.
In this section we introduce an important subclass of modular lattices, namely the distributive lattices. These were the first to be considered in the earliest investigations of lattice theory. Here we discuss various examples of distributive lattices, criteria for deciding whether a lattice is distributive, and the important results which relate modular and distributive lattices.
The converse of the above proposition is not true. For example, the diamond lattice given below has already been seen to be modular; we show that this lattice is not distributive.
Remark 3.3.3: Distributivity can be defined either by (1) or by (2) of (Lemma 3.1.3). In other words, 𝐿 is distributive if and only if its dual 𝐿𝐷 is so. An application of the duality principle likewise shows that 𝐿 is modular if and only if 𝐿𝐷 is so.
Proof: We know that ℙ(𝐸) is a bounded lattice with top element 𝐸 and bottom element ∅. Let 𝐴, 𝐵, 𝐶 ∈ ℙ(𝐸). Then 𝑥 ∈ 𝐴 ∩ (𝐵 ∪ 𝐶)
if and only if 𝑥 ∈ 𝐴 and 𝑥 ∈ 𝐵 ∪ 𝐶
if and only if (𝑥 ∈ 𝐴 and 𝑥 ∈ 𝐵) or (𝑥 ∈ 𝐴 and 𝑥 ∈ 𝐶)
if and only if 𝑥 ∈ (𝐴 ∩ 𝐵) or 𝑥 ∈ (𝐴 ∩ 𝐶)
if and only if 𝑥 ∈ (𝐴 ∩ 𝐵) ∪ (𝐴 ∩ 𝐶).
Thus 𝐴 ∩ (𝐵 ∪ 𝐶) = (𝐴 ∩ 𝐵) ∪ (𝐴 ∩ 𝐶). So ℙ(𝐸) is a distributive lattice.
Example 3.3.6: If 𝐸 is an ordered set then the lattice 𝒪(𝐸) of down sets of 𝐸 is a distributive
lattice.
Proof: 𝒪(𝐸) is a sublattice of ℙ(𝐸), and ℙ(𝐸) is a distributive lattice; thus by (Proposition 3.3.4) 𝒪(𝐸) is distributive.
Example 3.3.9: If 𝐷 is a distributive lattice and 𝐸 is a non-empty set, then the set 𝐷𝐸 of mappings 𝑓: 𝐸 → 𝐷, ordered by 𝑓 ≤ 𝑔 if and only if 𝑓(𝑥) ≤ 𝑔(𝑥) for all 𝑥 ∈ 𝐸, is a distributive lattice.
Proof: We have already proved that 𝐷𝐸 is a lattice with respect to this order, so we only need to verify the distributivity of 𝐷𝐸. Since 𝐷 is distributive, for all 𝑥 ∈ 𝐸 and 𝑓, 𝑔, ℎ ∈ 𝐷𝐸 we have
𝑓(𝑥) ∧ (𝑔(𝑥) ∨ ℎ(𝑥)) = (𝑓(𝑥) ∧ 𝑔(𝑥)) ∨ (𝑓(𝑥) ∧ ℎ(𝑥)),
and since meets and joins in 𝐷𝐸 are computed pointwise, this says exactly that 𝑓 ∧ (𝑔 ∨ ℎ) = (𝑓 ∧ 𝑔) ∨ (𝑓 ∧ ℎ).
Hence 𝐷𝐸 is distributive.
Theorem 3.3.10: A lattice 𝐿 is distributive if and only if for all 𝑥, 𝑦, 𝑧 ∈ 𝐿;
(𝑥 ∧ 𝑦) ∨ (𝑦 ∧ 𝑧) ∨ (𝑧 ∧ 𝑥) = (𝑥 ∨ 𝑦) ∧ (𝑦 ∨ 𝑧) ∧ (𝑧 ∨ 𝑥). (10)
Proof: Suppose that (10) holds; we show that 𝐿 is distributive. First suppose 𝑥 ≥ 𝑧. Then the left-hand side of (10) reduces to
(𝑥 ∧ 𝑦) ∨ (𝑦 ∧ 𝑧) ∨ (𝑧 ∧ 𝑥) = (𝑥 ∧ 𝑦) ∨ (𝑦 ∧ 𝑧) ∨ 𝑧 (since 𝑧 ∧ 𝑥 = 𝑧)
= (𝑥 ∧ 𝑦) ∨ 𝑧 (since 𝑦 ∧ 𝑧 ≤ 𝑧),
while the right-hand side of (10) reduces to
(𝑥 ∨ 𝑦) ∧ (𝑦 ∨ 𝑧) ∧ (𝑧 ∨ 𝑥) = (𝑥 ∨ 𝑦) ∧ (𝑦 ∨ 𝑧) ∧ 𝑥 (since 𝑥 ≥ 𝑧)
= 𝑥 ∧ (𝑥 ∨ 𝑦) ∧ (𝑦 ∨ 𝑧) (since the meet operation is commutative)
= 𝑥 ∧ (𝑦 ∨ 𝑧) (since 𝑥 ≤ 𝑥 ∨ 𝑦).
Hence from (10) we have (𝑥 ∧ 𝑦) ∨ 𝑧 = 𝑥 ∧ (𝑦 ∨ 𝑧) whenever 𝑥 ≥ 𝑧, which is the modular law; thus 𝐿 is modular. Now write 𝑢 for the left-hand side of (10) and 𝑣 for the right-hand side, so that 𝑢 = 𝑣 and hence 𝑥 ∧ 𝑢 = 𝑥 ∧ 𝑣.
Now 𝑥 ∧ 𝑢 = 𝑥 ∧ [(𝑦 ∧ 𝑧) ∨ (𝑥 ∧ 𝑦) ∨ (𝑥 ∧ 𝑧)]
= (𝑥 ∧ 𝑦 ∧ 𝑧) ∨ (𝑥 ∧ 𝑦) ∨ (𝑥 ∧ 𝑧) (by modularity, since (𝑥 ∧ 𝑦) ∨ (𝑥 ∧ 𝑧) ≤ 𝑥)
= (𝑥 ∧ 𝑦) ∨ (𝑥 ∧ 𝑧).
Also 𝑥 ∧ 𝑣 = 𝑥 ∧ [(𝑥 ∨ 𝑦) ∧ (𝑦 ∨ 𝑧) ∧ (𝑧 ∨ 𝑥)]
= [𝑥 ∧ (𝑥 ∨ 𝑦)] ∧ [(𝑦 ∨ 𝑧) ∧ (𝑧 ∨ 𝑥)] (since the meet operation is associative)
= [𝑥 ∧ (𝑧 ∨ 𝑥)] ∧ (𝑦 ∨ 𝑧) (since 𝑥 ∧ (𝑥 ∨ 𝑦) = 𝑥)
= 𝑥 ∧ (𝑦 ∨ 𝑧).
So 𝑥 ∧ (𝑦 ∨ 𝑧) = (𝑥 ∧ 𝑦) ∨ (𝑥 ∧ 𝑧).
Conversely, suppose that 𝐿 is distributive. Applying distributivity to the right-hand side of (10) we get
(𝑥 ∨ 𝑦) ∧ (𝑦 ∨ 𝑧) ∧ (𝑧 ∨ 𝑥) = [(𝑥 ∨ 𝑦) ∧ (𝑦 ∨ 𝑧) ∧ 𝑧] ∨ [(𝑥 ∨ 𝑦) ∧ (𝑦 ∨ 𝑧) ∧ 𝑥]
= [(𝑥 ∨ 𝑦) ∧ 𝑧] ∨ [(𝑦 ∨ 𝑧) ∧ 𝑥] (by the connecting lemma)
= (𝑥 ∧ 𝑧) ∨ (𝑦 ∧ 𝑧) ∨ (𝑥 ∧ 𝑦) ∨ (𝑥 ∧ 𝑧) (by distributivity)
= (𝑥 ∧ 𝑦) ∨ (𝑦 ∧ 𝑧) ∨ (𝑧 ∧ 𝑥).
Thus distributivity implies (10).
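Identity (10) can be spot-checked in a known distributive lattice, say (ℙ({1, 2, 3}); ⊆):

```python
from itertools import product, combinations

# In (P({1,2,3}); ⊆), ∧ is intersection and ∨ is union.
subsets = [frozenset(c) for r in range(4) for c in combinations({1, 2, 3}, r)]
ok = all((x & y) | (y & z) | (z & x) == (x | y) & (y | z) & (z | x)
         for x, y, z in product(subsets, repeat=3))
assert ok   # the median identity (10) holds in this distributive lattice
```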
Theorem 3.3.11: If 𝐿 is a distributive lattice, then the ideal lattice 𝔗(𝐿) is distributive.
Definition 3.3.12: The direct product 𝑃𝑄 of two partially ordered sets 𝑃 and 𝑄 is the set of all
couples (𝑥, 𝑦) with 𝑥 ∈ 𝑃, 𝑦 ∈ 𝑄 partially ordered by the rule that (𝑥1 , 𝑦1 ) ≤ (𝑥2 , 𝑦2 ) if and
only if 𝑥1 ≤ 𝑥2 in 𝑃 and 𝑦1 ≤ 𝑦2 in 𝑄.
Proposition 3.3.13: The direct product 𝐿𝑀 of any two distributive lattices is a distributive
lattice.
Proof: For any two elements (𝑥1, 𝑦1) and (𝑥2, 𝑦2) in 𝐿𝑀, the element (𝑥1 ∨ 𝑥2, 𝑦1 ∨ 𝑦2) dominates both (𝑥1, 𝑦1) and (𝑥2, 𝑦2), hence is an upper bound for the pair. Let (𝑢, 𝑣) be any upper bound for the pair (𝑥1, 𝑦1), (𝑥2, 𝑦2); then 𝑢 ≥ 𝑥1, 𝑥2 implies 𝑢 ≥ 𝑥1 ∨ 𝑥2, and likewise 𝑣 ≥ 𝑦1 ∨ 𝑦2. Hence (𝑥1, 𝑦1) ∨ (𝑥2, 𝑦2) = (𝑥1 ∨ 𝑥2, 𝑦1 ∨ 𝑦2). Dually we can show that
(𝑥1, 𝑦1) ∧ (𝑥2, 𝑦2) = (𝑥1 ∧ 𝑥2, 𝑦1 ∧ 𝑦2), which proves that 𝐿𝑀 is a lattice.
To prove distributivity, let 𝑥, 𝑦, 𝑧 ∈ 𝐿𝑀 then we have 𝑥 = (𝑥1 , 𝑦1 ), 𝑦 = (𝑥2 , 𝑦2 ) , 𝑧 =
(𝑥3 , 𝑦3 ) where 𝑥1 , 𝑥2 , 𝑥3 ∈ 𝐿 and 𝑦1 , 𝑦2 , 𝑦3 ∈ 𝑀.
Now, 𝑥 ∧ (𝑦 ∨ 𝑧) = (𝑥1, 𝑦1) ∧ [(𝑥2, 𝑦2) ∨ (𝑥3, 𝑦3)]
= (𝑥1, 𝑦1) ∧ (𝑥2 ∨ 𝑥3, 𝑦2 ∨ 𝑦3) (by definition of join in 𝐿𝑀)
= (𝑥1 ∧ (𝑥2 ∨ 𝑥3), 𝑦1 ∧ (𝑦2 ∨ 𝑦3)) (by definition of meet in 𝐿𝑀)
= ((𝑥1 ∧ 𝑥2) ∨ (𝑥1 ∧ 𝑥3), (𝑦1 ∧ 𝑦2) ∨ (𝑦1 ∧ 𝑦3)) (since 𝐿, 𝑀 are distributive)
= [(𝑥1, 𝑦1) ∧ (𝑥2, 𝑦2)] ∨ [(𝑥1, 𝑦1) ∧ (𝑥3, 𝑦3)]
= (𝑥 ∧ 𝑦) ∨ (𝑥 ∧ 𝑧).
So 𝐿𝑀 is distributive.
Proposition 3.3.14: A lattice 𝐿 is distributive if and only if
𝑥 ∨ (𝑦 ∧ 𝑧) ≥ (𝑥 ∨ 𝑦) ∧ 𝑧 for all 𝑥, 𝑦, 𝑧 ∈ 𝐿.
Theorem 3.3.15: A lattice 𝐿 is distributive if and only if it has no sublattice of either of the
forms given below. Equivalently, 𝐿 is distributive if and only if 𝑧 ∧ 𝑥 = 𝑧 ∧ 𝑦 and 𝑧 ∨ 𝑥 = 𝑧 ∨
𝑦 implies 𝑥 = 𝑦.
Proof: Observe first that the two statements are equivalent. In fact, if 𝑥 ∧ 𝑧 = 𝑦 ∧ 𝑧 and 𝑥 ∨ 𝑧 = 𝑦 ∨ 𝑧 with 𝑥 ≠ 𝑦, then the two lattices shown above arise according as 𝑥 ∥ 𝑦 or 𝑥 and 𝑦 are comparable.
Now suppose that 𝐿 is distributive and that there exists 𝑥, 𝑦, 𝑧 ∈ 𝐿 such that 𝑥 ∧ 𝑧 = 𝑦 ∧ 𝑧
and
𝑥 ∨ 𝑧 = 𝑦 ∨ 𝑧 then we have;
𝑥 = 𝑥 ∧ ( 𝑥 ∨ 𝑧) (since 𝑥 ≤ 𝑥 ∨ 𝑧)
= 𝑥 ∧ (𝑦 ∨ 𝑧) (since 𝑧 ∨ 𝑥 = 𝑧 ∨ 𝑦)
= (𝑥 ∧ 𝑦) ∨ (𝑥 ∧ 𝑧) (by distributivity of 𝐿)
= (𝑥 ∧ 𝑦) ∨ (𝑦 ∧ 𝑧) (since 𝑥 ∧ 𝑧 = 𝑦 ∧ 𝑧)
= 𝑦 ∧ (𝑥 ∨ 𝑧) (by distributivity of 𝐿)
= 𝑦 ∧ (𝑦 ∨ 𝑧) (since 𝑧 ∨ 𝑥 = 𝑧 ∨ 𝑦)
= 𝑦. (since 𝑦 ∨ 𝑧 ≥ 𝑦)
Consequently, 𝐿 has no sublattice of either of the forms.
Conversely if 𝐿 has no sublattice of either of the above forms then by the theorem (3.1.6) 𝐿
must be modular.
Given 𝑎, 𝑏, 𝑐 ∈ 𝐿, we define 𝑎⋆ = (𝑏 ∨ 𝑐) ∧ 𝑎, 𝑏 ⋆ = (𝑐 ∨ 𝑎) ∧ 𝑏, 𝑐 ⋆ = (𝑎 ∨ 𝑏) ∧ 𝑐, then
𝑎⋆ ∧ 𝑐⋆ = [(𝑏 ∨ 𝑐) ∧ 𝑎] ∧ [(𝑎 ∨ 𝑏) ∧ 𝑐] = (𝑏 ∨ 𝑐) ∧ (𝑎 ∧ 𝑐) = [(𝑏 ∨ 𝑐) ∧ 𝑐] ∧ 𝑎 = 𝑐 ∧ 𝑎 = 𝑎 ∧ 𝑐.
Similarly 𝑏⋆ ∧ 𝑐⋆ = 𝑏 ∧ 𝑐 and 𝑎⋆ ∧ 𝑏⋆ = 𝑎 ∧ 𝑏.
Now let 𝑑 = (𝑎 ∨ 𝑏) ∧ (𝑏 ∨ 𝑐) ∧ (𝑐 ∨ 𝑎) then;
𝑎⋆ ∨ 𝑐⋆ = 𝑎⋆ ∨ [(𝑎 ∨ 𝑏) ∧ 𝑐] (by definition of 𝑐⋆)
= (𝑎⋆ ∨ 𝑐) ∧ (𝑎 ∨ 𝑏) (by modularity of 𝐿, since 𝑎⋆ ≤ 𝑎 ∨ 𝑏)
= [((𝑏 ∨ 𝑐) ∧ 𝑎) ∨ 𝑐] ∧ (𝑎 ∨ 𝑏) (since 𝑎⋆ = (𝑏 ∨ 𝑐) ∧ 𝑎)
= [(𝑏 ∨ 𝑐) ∧ (𝑎 ∨ 𝑐)] ∧ (𝑎 ∨ 𝑏) (by modularity, since 𝑐 ≤ 𝑏 ∨ 𝑐)
= 𝑑.
By symmetry we deduce that 𝑎⋆ ∨ 𝑐⋆ = 𝑎⋆ ∨ 𝑏⋆ = 𝑏⋆ ∨ 𝑐⋆ = 𝑑.
We now observe that
𝑐⋆ ∨ [𝑎⋆ ∨ (𝑏 ∧ 𝑐)] = 𝑑 ∨ (𝑏 ∧ 𝑐) = 𝑑 and
𝑐⋆ ∧ [𝑎⋆ ∨ (𝑏 ∧ 𝑐)] = (𝑐⋆ ∧ 𝑎⋆) ∨ (𝑏 ∧ 𝑐) = (𝑎 ∧ 𝑐) ∨ (𝑏 ∧ 𝑐),
and by symmetry
𝑐⋆ ∨ [𝑏⋆ ∨ (𝑎 ∧ 𝑐)] = 𝑑 and
𝑐⋆ ∧ [𝑏⋆ ∨ (𝑎 ∧ 𝑐)] = (𝑎 ∧ 𝑐) ∨ (𝑏 ∧ 𝑐).
By the hypothesis we deduce that 𝑎⋆ ∨ (𝑏 ∧ 𝑐) = 𝑏⋆ ∨ (𝑎 ∧ 𝑐), whence
𝑎⋆ ∨ (𝑏 ∧ 𝑐) = 𝑎⋆ ∨ (𝑏 ∧ 𝑐) ∨ 𝑏⋆ ∨ (𝑎 ∧ 𝑐) = 𝑎⋆ ∨ 𝑏⋆ = 𝑑.
It follows from this that (𝑎 ∨ 𝑏) ∧ 𝑐 = 𝑐⋆ = 𝑐⋆ ∧ 𝑑 = 𝑐⋆ ∧ [𝑎⋆ ∨ (𝑏 ∧ 𝑐)] = (𝑐⋆ ∧ 𝑎⋆) ∨ (𝑏 ∧ 𝑐) = (𝑎 ∧ 𝑐) ∨ (𝑏 ∧ 𝑐). Thus 𝐿 is distributive.
Solution: We know that (ℕ; |) is a lattice with sup{𝑚, 𝑛} = lcm{𝑚, 𝑛} and inf{𝑚, 𝑛} = gcd{𝑚, 𝑛}. To show that ℕ is distributive we use the above theorem. Let 𝑥, 𝑦, 𝑧 ∈ ℕ be such that 𝑥 ∨ 𝑦 = 𝑧 ∨ 𝑦 and 𝑥 ∧ 𝑦 = 𝑧 ∧ 𝑦, that is, lcm{𝑥, 𝑦} = lcm{𝑧, 𝑦} and gcd{𝑥, 𝑦} = gcd{𝑧, 𝑦}. Since lcm{𝑚, 𝑛} = 𝑚𝑛/gcd{𝑚, 𝑛}, this gives
𝑥𝑦/gcd{𝑥, 𝑦} = 𝑧𝑦/gcd{𝑧, 𝑦} = 𝑧𝑦/gcd{𝑥, 𝑦},
whence 𝑥𝑦 = 𝑧𝑦 and so 𝑥 = 𝑧. Thus by Theorem 3.3.15, (ℕ; |) is distributive.
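The cancellation argument above lends itself to a quick numerical check. A sketch (my own, using an arbitrary small bound):

```python
# In (ℕ; |), lcm{x,y} = lcm{z,y} and gcd{x,y} = gcd{z,y} force x = z,
# so by Theorem 3.3.15 the divisibility lattice is distributive.
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

N = range(1, 60)
for y in N:
    for x in N:
        for z in N:
            if lcm(x, y) == lcm(z, y) and gcd(x, y) == gcd(z, y):
                # xy/gcd{x,y} = lcm{x,y} = lcm{z,y} = zy/gcd{z,y} = zy/gcd{x,y}
                assert x == z
print("cancellation holds for all x, y, z < 60")
```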
Proposition 3.3.16: A lattice 𝐿 is distributive if and only if, for all ideals 𝐼, 𝐽 of 𝐿,
𝐼 ∨ 𝐽 = {𝑖 ∨ 𝑗 : 𝑖 ∈ 𝐼, 𝑗 ∈ 𝐽}.
Lemma 3.4.1: Let (𝐿; ≤) be a lattice with universal upper and lower bounds 1 and 0. For any element 𝑎 ∈ 𝐿: 𝑎 ∨ 1 = 1, 𝑎 ∧ 1 = 𝑎 and 𝑎 ∨ 0 = 𝑎, 𝑎 ∧ 0 = 0.
Complemented elements:
Definition 3.4.2: If 𝐿 is a bounded lattice then we say that 𝑦 ∈ 𝐿 is a complement of 𝑥 ∈ 𝐿 if 𝑥 ∧ 𝑦 = 0 and 𝑥 ∨ 𝑦 = 1. In this case we say that 𝑥 is a complemented element of 𝐿.
Since the meet and join operations are commutative, 𝑥 ∧ 𝑦 = 0 if and only if 𝑦 ∧ 𝑥 = 0, and 𝑥 ∨ 𝑦 = 1 if and only if 𝑦 ∨ 𝑥 = 1. Thus the definition of complement is symmetric in 𝑥 and 𝑦, so that 𝑦 is a complement of 𝑥 if and only if 𝑥 is a complement of 𝑦. We conclude that every complement of a complemented element is itself complemented.
In the lattices shown above, the first of which is non-modular and the second modular but not distributive, the elements 𝑥 and 𝑦 are both complements of 𝑧; thus in these cases 𝑧 has two complements. In general a complement need not be unique. Also, from the above lemma we have 0 ∧ 1 = 0 and 0 ∨ 1 = 1, which shows that 0 and 1 are complements of each other. It is easy to show that 1 is the only complement of 0. In fact, if 𝑐 ∈ 𝐿 with 𝑐 ≠ 1 is a complement of 0, then 0 ∨ 𝑐 = 1; but since 0 ≤ 𝑐 we have 0 ∨ 𝑐 = 𝑐, so 𝑐 = 1, a contradiction. In a similar manner we can show that 0 is the only complement of 1.
Example 3.4.5: Let 𝑉 be a vector space and consider the lattice 𝐿(𝑉) of subspaces of 𝑉. We
have seen that 𝐿(𝑉) is modular (Example 3.1.8). It is also complemented. To establish this we
observe that if 𝑊 is a subspace of 𝑉, then any basis of 𝑊 can be extended to a basis of 𝑉 by
means of a set 𝐴 = {𝑥𝛼 : 𝛼 ∈ 𝐼} of elements of 𝑉. The subspace generated by 𝐴 then serves as a complement of 𝑊 in 𝐿(𝑉).
Definition 3.4.6: We say that a lattice 𝐿 is relatively complemented if every interval [𝑥, 𝑦] of 𝐿 is complemented. A complement in [𝑥, 𝑦] of 𝑎 ∈ [𝑥, 𝑦] is called a relative complement of 𝑎.
Theorem 3.4.7: Every complemented modular lattice is relatively complemented.
Proof: Let 𝐿 be a complemented modular lattice. Given any [𝑎, 𝑏] ⊆ 𝐿 and 𝑥 ∈ [𝑎, 𝑏], let 𝑦 be
the complement of 𝑥 in 𝐿. Consider the element
𝑧 = 𝑏 ∧ (𝑎 ∨ 𝑦) = (𝑏 ∧ 𝑦) ∨ 𝑎 (since 𝐿 is modular)
Then clearly 𝑧 ≥ 𝑎 and 𝑧 ≤ 𝑏, so that 𝑧 ∈ [𝑎, 𝑏] and by modularity,
𝑥 ∧ 𝑧 = 𝑥 ∧ (𝑦 ∨ 𝑎) = (𝑥 ∧ 𝑦) ∨ 𝑎 = 0 ∨ 𝑎 = 𝑎;
𝑥 ∨ 𝑧 = 𝑥 ∨ (𝑏 ∧ 𝑦) = (𝑥 ∨ 𝑦) ∧ 𝑏 = 1 ∧ 𝑏 = 𝑏.
Thus 𝑧 is a complement of 𝑥 in [𝑎, 𝑏].
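The construction in this proof can be tested concretely. The sketch below (my own illustration) works in the lattice of divisors of the square-free number 30, which is complemented and modular, with join = lcm, meet = gcd and the complement of 𝑥 taken as 30/𝑥:

```python
# z = b ∧ (a ∨ y) is a relative complement of x in [a, b],
# where y is the complement of x in the whole lattice.
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

n = 30
for a in divisors(n):
    for b in divisors(n):
        if b % a:                      # need a <= b, i.e. a | b
            continue
        for x in divisors(b):
            if x % a:                  # need x in the interval [a, b]
                continue
            y = n // x                 # complement of x in the whole lattice
            z = gcd(b, lcm(a, y))      # z = b ∧ (a ∨ y)
            assert gcd(x, z) == a and lcm(x, z) == b
print("z = b ∧ (a ∨ y) is a relative complement in every interval")
```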
Theorem 3.4.8: In a distributive lattice all complements and relative complements are unique.
Proof: We first show that if an element in a distributive lattice has a complement then this
complement is unique. Suppose that an element 𝑎 has two complements 𝑏 and 𝑐. We have
𝑏 = 𝑏∧1 (since 1 ≥ 𝑏)
= 𝑏 ∧ (𝑎 ∨ 𝑐) (since 𝑐 is complement of 𝑎)
= (𝑏 ∧ 𝑎) ∨ (𝑏 ∧ 𝑐) (by definition of distributivity)
= 0 ∨ (𝑏 ∧ 𝑐) (since 𝑏 ∧ 𝑎 = 0)
= (𝑎 ∧ 𝑐) ∨ (𝑏 ∧ 𝑐) (since 𝑎 ∧ 𝑐 = 0)
= (𝑎 ∨ 𝑏) ∧ 𝑐 (by definition of distributive lattice)
=1∧𝑐 (since 𝑎 ∨ 𝑏 = 1)
=𝑐 (since 1 ≥ 𝑐).
Given any [𝑎, 𝑏] ⊆ 𝐿 (𝐿 a distributive lattice) and 𝑥 ∈ [𝑎, 𝑏], let 𝑝 and 𝑞 be relative complements of 𝑥 in [𝑎, 𝑏]. We have
𝑝 =𝑝∧𝑏 (since 𝑝 ≤ 𝑏)
= 𝑝 ∧ (𝑥 ∨ 𝑞) (since 𝑞 is the relative complement of 𝑥)
= (𝑝 ∧ 𝑥) ∨ (𝑝 ∧ 𝑞) (by distributivity of 𝐿)
= 𝑎 ∨ (𝑝 ∧ 𝑞) (since 𝑝 is relative complement of 𝑥)
= (𝑥 ∧ 𝑞) ∨ (𝑝 ∧ 𝑞) (since 𝑞 is relative complement of 𝑥)
= (𝑥 ∨ 𝑝) ∧ 𝑞 (by distributivity of 𝐿)
=𝑏∧𝑞 (since 𝑝 is relative complement of 𝑥)
=𝑞 (since 𝑞 ≤ 𝑏).
With these technical details to hand suppose now that 𝒜 is the set of atoms of 𝐿. For every 𝑥 ∈
𝐿 let 𝒜𝑥 = {𝑎 ∈ 𝒜: 𝑎 ≤ 𝑥}, and consider the mapping 𝑓: 𝐿 → ℙ(𝒜) given by the prescription
𝑓(𝑥) = 𝒜𝑥 . It is clear from (2) that 𝑓 is injective. Moreover, using (6) we see that
𝒜𝑥∨𝑦 = 𝒜𝑥 ∪ 𝒜𝑦 and so 𝑓 is a join morphism. Now clearly for 𝑝 ∈ 𝒜, we have 𝑝 ≤ 𝑥 ∧ 𝑦 if
and only if 𝑝 ≤ 𝑥 and 𝑝 ≤ 𝑦. It follows that 𝒜𝑥∧𝑦 = 𝒜𝑥 ∩ 𝒜𝑦 and so 𝑓 is a lattice morphism.
Thus 𝐿 ≃ Im 𝑓, where Im 𝑓 is a sublattice of the distributive lattice ℙ(𝒜). Consequently, 𝐿 is
distributive.
Proof: Suppose that 𝐿 is complete and let 𝑁 = {𝑝𝑖 : 𝑖 ∈ 𝐼} where each 𝑝𝑖 ∈ 𝒜, and let 𝑞 be an atom with 𝑞 ≤ ⋁𝑖∈𝐼 𝑝𝑖. Then necessarily 𝑞 = 𝑝𝑖 for some 𝑖 ∈ 𝐼. In fact, suppose that 𝑞 ≠ 𝑝𝑖 for all 𝑖 ∈ 𝐼. Then (4) gives 𝑝𝑖 ≤ 𝑞′ for every 𝑖, whence we have the contradiction 𝑞 ≤ ⋁𝑖∈𝐼 𝑝𝑖 ≤ 𝑞′. We conclude that 𝑁 = 𝒜𝑥, where 𝑥 = ⋁𝑖∈𝐼 𝑝𝑖. Hence 𝑓 is also surjective and 𝐿 ≃ ℙ(𝒜).
Definition 3.5.4: By the width of the lattice 𝐿 we mean the supremum of the cardinalities of
the antichains in 𝐿.
Proof: Under the descending chain condition every non-zero element of the lattice dominates an atom, so 𝐿 is atomic. Thus by the Birkhoff-Ward theorem 𝐿 is distributive.
Proof: (1) ⇒ (2): Suppose that (1) holds, that is, 𝑥 ↦ 𝑥′ is antitone. Then from 𝑥 ∧ 𝑦 ≤ 𝑥, 𝑦 we obtain (𝑥 ∧ 𝑦)′ ≥ 𝑥′ ∨ 𝑦′ and consequently 𝑥 ∧ 𝑦 = (𝑥 ∧ 𝑦)′′ ≤ (𝑥′ ∨ 𝑦′)′. Likewise 𝑥′, 𝑦′ ≤ 𝑥′ ∨ 𝑦′ gives (𝑥′ ∨ 𝑦′)′ ≤ 𝑥′′ ∧ 𝑦′′ = 𝑥 ∧ 𝑦. Hence 𝑥 ∧ 𝑦 = (𝑥′ ∨ 𝑦′)′ and consequently (𝑥 ∧ 𝑦)′ = (𝑥′ ∨ 𝑦′)′′ = 𝑥′ ∨ 𝑦′.
(2) ⇒ (1): Suppose that (𝑥 ∧ 𝑦)′ = 𝑥′ ∨ 𝑦′ for all 𝑥, 𝑦 ∈ 𝐿, and let 𝑥 ≤ 𝑦. This gives 𝑥 ∧ 𝑦 = 𝑥, so that 𝑥′ = (𝑥 ∧ 𝑦)′ = 𝑥′ ∨ 𝑦′ ≥ 𝑦′.
A dual proof establishes the equivalence of (1) and (3).
As for the distributivity, suppose that any one of the above conditions holds. Then we have the property that 𝑥 ≤ 𝑦 implies
𝑦 = 𝑥 ∨ (𝑥′ ∧ 𝑦); (4)
𝑥 = (𝑥 ∨ 𝑦′) ∧ 𝑦. (5)
In fact, if 𝑥 ≤ 𝑦 then
[𝑥 ∨ (𝑥′ ∧ 𝑦)]′ ∨ 𝑦 ≥ [𝑥 ∨ (𝑥′ ∧ 𝑦)]′ ∨ 𝑥 ∨ (𝑥′ ∧ 𝑦) = 1 (since 𝑡′ ∨ 𝑡 = 1 for every 𝑡),
and since 1 is the top element we also have [𝑥 ∨ (𝑥′ ∧ 𝑦)]′ ∨ 𝑦 ≤ 1; this gives
[𝑥 ∨ (𝑥′ ∧ 𝑦)]′ ∨ 𝑦 = 1, and by (3),
[𝑥 ∨ (𝑥′ ∧ 𝑦)]′ ∧ 𝑦 = 𝑥′ ∧ (𝑥′ ∧ 𝑦)′ ∧ 𝑦 = (𝑥′ ∧ 𝑦) ∧ (𝑥′ ∧ 𝑦)′ = 0.
Thus 𝑦 = [𝑥 ∨ (𝑥′ ∧ 𝑦)]′′ = 𝑥 ∨ (𝑥′ ∧ 𝑦), and so (4) holds. As for (5), from (4) we see that 𝑥 ≤ 𝑦 implies 𝑦′ ≤ 𝑥′, so by (4) applied to 𝑦′ ≤ 𝑥′ we have 𝑥′ = 𝑦′ ∨ (𝑦 ∧ 𝑥′), which implies 𝑥 = 𝑥′′ = 𝑦 ∧ (𝑦′ ∨ 𝑥) = (𝑥 ∨ 𝑦′) ∧ 𝑦, so (5) holds.
We now use (4) and (5) to show that 𝐿 is distributive. For this purpose suppose that 𝑎, 𝑏, 𝑐 ∈ 𝐿 are such that 𝑎 ∨ 𝑐 = 𝑏 ∨ 𝑐 = 𝛼 and 𝑎 ∧ 𝑐 = 𝑏 ∧ 𝑐 = 𝛽. Since 𝛽 ≤ 𝑐, (4) gives 𝑐 = 𝛽 ∨ (𝛽′ ∧ 𝑐). Then on the one hand,
𝑎 ∨ 𝛼′ ∨ (𝑐 ∧ 𝛽′) = 𝑎 ∨ 𝛽 ∨ (𝛽′ ∧ 𝑐) ∨ 𝛼′ = 𝑎 ∨ 𝑐 ∨ 𝛼′ = 𝛼 ∨ 𝛼′ = 1,
and similarly 𝑏 ∨ 𝛼′ ∨ (𝑐 ∧ 𝛽′) = 1. On the other hand,
𝑎 ∧ [𝛼′ ∨ (𝑐 ∧ 𝛽′)] = 𝑎 ∧ 𝑐 ∧ 𝛽′ = 𝛽 ∧ 𝛽′ = 0.
Similarly 𝑏 ∧ [𝛼′ ∨ (𝑐 ∧ 𝛽′)] = 0. Thus by the uniqueness of complements 𝑎 = [𝛼′ ∨ (𝑐 ∧ 𝛽′)]′ = 𝑏. Therefore, by Theorem 3.3.15, 𝐿 is distributive. The properties (2) and (3) in the above theorem are often referred to as the de Morgan laws.
Definition 4.1.1: A Boolean algebra is a lattice 𝐵 with a greatest element 1 and a smallest element 0 such that 𝐵 is both distributive and complemented.
Example 4.1.2: The power set ℙ(𝑋) of a set 𝑋 is our prototype for a Boolean algebra. As it turns out, it is also one of the most important Boolean algebras. The following theorem allows us to characterize Boolean algebras in terms of the binary operations ∨ and ∧ without mention of the fact that a Boolean algebra is a poset.
The next result proves that the Boolean algebra is an algebraic structure with respect to the
operation of join and meet.
Theorem 4.1.3: A set 𝐵 is a Boolean algebra if and only if there exist binary operations ∨ and
∧ on 𝐵 satisfying the following axioms.
(1) 𝑎 ∨ 𝑏 = 𝑏 ∨ 𝑎 and 𝑎 ∧ 𝑏 = 𝑏 ∧ 𝑎 for all 𝑎, 𝑏 ∈ 𝐵,
(2) 𝑎 ∨ (𝑏 ∨ 𝑐) = (𝑎 ∨ 𝑏) ∨ 𝑐 and 𝑎 ∧ (𝑏 ∧ 𝑐) = (𝑎 ∧ 𝑏) ∧ 𝑐 for all 𝑎, 𝑏, 𝑐 ∈ 𝐵,
(3) 𝑎 ∧ (𝑏 ∨ 𝑐) = (𝑎 ∧ 𝑏) ∨ (𝑎 ∧ 𝑐) and 𝑎 ∨ (𝑏 ∧ 𝑐) = (𝑎 ∨ 𝑏) ∧ (𝑎 ∨ 𝑐) for all 𝑎, 𝑏, 𝑐 ∈ 𝐵,
(4) There exist elements 1 and 0 such that 𝑎 ∨ 0 = 𝑎 and 𝑎 ∧ 1 = 𝑎 for all 𝑎 ∈ 𝐵,
(5) For every 𝑎 ∈ 𝐵 there exists an 𝑎´ ∈ 𝐵 such that 𝑎 ∨ 𝑎´ = 1 and 𝑎 ∧ 𝑎´ = 0.
Proof: Let 𝐵 be a set satisfying (1)-(5) in the theorem. We show that 𝐵 is a Boolean algebra.
One of the idempotent laws is satisfied since
𝑎 = 𝑎 ∨ 0 (using (4))
= 𝑎 ∨ (𝑎 ∧ 𝑎´) (using (5))
= (𝑎 ∨ 𝑎) ∧ (𝑎 ∨ 𝑎´) (using (3))
= (𝑎 ∨ 𝑎) ∧ 1 (using (5))
= 𝑎 ∨ 𝑎. (using (4))
Observe that,
1 ∨ 𝑏 = (1 ∨ 𝑏) ∧ 1 = (𝑏 ∨ 1) ∧ (𝑏 ∨ 𝑏´) = 𝑏 ∨ (1 ∧ 𝑏´) = 𝑏 ∨ 𝑏´ = 1.
Consequently, the first of the two absorption laws holds, since
𝑎 ∨ (𝑎 ∧ 𝑏) = (𝑎 ∧ 1) ∨ (𝑎 ∧ 𝑏)
= 𝑎 ∧ (1 ∨ 𝑏)
=𝑎∧1 (using (4))
= 𝑎.
The other idempotent and absorption laws are proven similarly. Since 𝐵 also satisfies (1)-(3), the conditions of Theorems 2.1.6 and 2.1.7 are met, therefore 𝐵 must be a lattice. Condition (3) tells us that 𝐵 is a distributive lattice.
For 𝑎 ∈ 𝐵, 0 ∨ 𝑎 = 𝑎, hence 0 ≤ 𝑎 and 0 is the bottom element of 𝐵. To show that 1 is the top element of 𝐵, note that by the absorption laws
𝑎 ∨ 1 = (𝑎 ∧ 1) ∨ 1 = 1 ∨ (1 ∧ 𝑎) = 1,
so 𝑎 ≤ 1 for all 𝑎 in 𝐵. Finally, since 𝐵 is complemented by (5), 𝐵 must be a Boolean algebra.
Conversely suppose that 𝐵 is Boolean algebra. Let 1 and 0 be the greatest and least elements
in 𝐵 respectively. If we define 𝑎 ∨ 𝑏 and 𝑎 ∧ 𝑏 as least upper and greatest lower bounds of
{𝑎, 𝑏}, then 𝐵 is a lattice by Theorem 2.1.6 and 2.1.7, definition of distributive lattice and our
hypothesis.
Many other identities hold in Boolean algebras. Some of these are listed in the following
theorem.
Definition 4.1.6: When the underlying set 𝐵 is empty, the resulting algebra is degenerate in
the sense that it has just one element. In this case, the operations of join, meet, and
complementation are all constant, and 0 = 1. The simplest non-degenerate Boolean algebra is
discussed in example below;
Example 4.1.7: The class of all subsets of a one-element set, which has just two elements, 0 (the empty set) and 1 (the one-element set), forms a non-degenerate Boolean algebra under the operations of join and meet described by the following arithmetic tables:
˅ 0 1        ˄ 0 1
0 0 1  and   0 0 0
1 1 1        1 0 1
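The same tables can be rendered as a small sketch (my own) using the integers 0 and 1, with join as max, meet as min and complement 1 − 𝑥:

```python
# The two-element Boolean algebra: join = max, meet = min on {0, 1}.
join = lambda a, b: max(a, b)   # the table for ∨
meet = lambda a, b: min(a, b)   # the table for ∧
comp = lambda a: 1 - a          # complementation

for a in (0, 1):
    for b in (0, 1):
        print(a, "∨", b, "=", join(a, b), "   ", a, "∧", b, "=", meet(a, b))

# the defining identities of a complemented lattice hold:
assert all(join(a, comp(a)) == 1 and meet(a, comp(a)) == 0 for a in (0, 1))
```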
Notation: By 𝑅 𝑋 we shall denote the set of all functions from 𝑋 into 𝑅 (as discussed in example
4.2.) throughout.
Example 4.1.8: The set 𝑅𝑋 forms a Boolean algebra, as it clearly satisfies the axioms of Theorem 4.1.3.
Theorem 4.1.10: Let 𝜑 be an isomorphism from a Boolean algebra 𝐵 into (respectively, onto)
a Boolean algebra 𝐶(with the notation given above) then:
(a) 𝜑(0𝐵) = 0𝐶 and 𝜑(1𝐵) = 1𝐶.
(b) It is not necessary to assume that
𝜑(𝑥 ∨𝐵 𝑦) = 𝜑(𝑥) ∨𝐶 𝜑(𝑦) for all 𝑥, 𝑦 in 𝐵.
Alternatively, we could omit the assumption that
𝜑(𝑥 ∧𝐵 𝑦) = 𝜑(𝑥) ∧𝐶 𝜑(𝑦) for all 𝑥, 𝑦 in 𝐵.
(c) If 𝜓 is an isomorphism from 𝐶 into (respectively, onto) a Boolean algebra 𝐷 =
(𝐷,∧𝐷 ,∨𝐷 , 0, 1𝐷 ) then the composite mapping 𝜓 ∘ 𝜑 is an isomorphism from 𝐵 into
(respectively, onto) 𝐷.
(d) The inverse mapping 𝜑 −1 is an isomorphism from the subalgebra of 𝐶 determined by 𝜑[ℬ]
onto 𝐵 and in particular if 𝜑 is onto 𝐶, then 𝜑 −1 is an isomorphism from 𝐶 onto ℬ.
Proof: (c) First, 𝜓 ∘ 𝜑 is one-one (if 𝑥 ≠ 𝑦, then 𝜑(𝑥) ≠ 𝜑(𝑦) and therefore 𝜓(𝜑(𝑥)) ≠ 𝜓(𝜑(𝑦))).
(d) Assume 𝑧 ∈ 𝜑[ℬ]. Then 𝑧 = 𝜑(𝑥) and 𝑤 = 𝜑(𝑦) for some 𝑥 and 𝑦 in 𝐵. Hence 𝑥 =
𝜑 −1 (𝑧) and 𝑦 = 𝜑 −1 (𝑤). First, if 𝑧 ≠ 𝑤, then 𝑥 ≠ 𝑦 (for if 𝑥 = 𝑦, then 𝑧 = 𝜑(𝑥) = 𝜑(𝑦)
= 𝑤). Thus 𝜑 −1 is one-one. Second, 𝜑(𝑥 ∨𝐵 𝑦) = 𝜑(𝑥) ∨𝐶 𝜑(𝑦) = 𝑧 ∨𝐶 𝑤. Hence
𝜑−1(𝑧 ∨𝐶 𝑤) = 𝑥 ∨𝐵 𝑦 = 𝜑−1(𝑧) ∨𝐵 𝜑−1(𝑤). Thirdly we have 𝜑(𝑥′𝐵) = (𝜑(𝑥))′𝐶 = 𝑧′𝐶, hence 𝜑−1(𝑧′𝐶) = 𝑥′𝐵 = (𝜑−1(𝑧))′𝐵.
We say that 𝐵 is isomorphic with 𝐶 if and only if there is an isomorphism from 𝐵 onto 𝐶.
From Theorem 4.1.10 (c) and (d) it follows that, if 𝐵 is isomorphic with 𝐶 then 𝐶 is isomorphic with 𝐵, and if in addition 𝐶 is isomorphic with 𝐷 then 𝐵 is isomorphic with 𝐷. Isomorphic Boolean
algebras have in a certain sense the same Boolean structure. More precisely, this means that
any property (formulated in the language of Boolean algebras) holding for one Boolean algebra
also holds for any isomorphic Boolean algebra.
Example 4.1.11: Boolean algebras 𝑃(𝑋) and 𝑅 𝑋 are isomorphic via the mapping that takes
each subset of 𝑋 to its characteristic function.
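A small sketch (my own illustration) of this isomorphism, identifying a 2-valued function on 𝑋 with a tuple of 0s and 1s indexed by 𝑋, so that union and intersection of subsets become pointwise max and min:

```python
# A subset A of X corresponds to its characteristic function chi_A.
from itertools import combinations

X = [1, 2, 3]

def chi(A):
    """Characteristic function of A, as a tuple indexed by X."""
    return tuple(1 if x in A else 0 for x in X)

subsets = [set(c) for r in range(len(X) + 1) for c in combinations(X, r)]
for A in subsets:
    for B in subsets:
        # chi_(A ∪ B) = chi_A ∨ chi_B  and  chi_(A ∩ B) = chi_A ∧ chi_B
        assert chi(A | B) == tuple(max(a, b) for a, b in zip(chi(A), chi(B)))
        assert chi(A & B) == tuple(min(a, b) for a, b in zip(chi(A), chi(B)))
print("the map A ↦ chi_A preserves join and meet")
```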
We will show that any finite Boolean algebra is isomorphic to the Boolean algebra obtained by taking the power set of some finite set 𝑋. We will need a few lemmas and definitions before we prove this result. Let 𝐵 be a finite Boolean algebra. Recall that an element 𝑎 ∈ 𝐵 is an atom of 𝐵 if 𝑎 ≠ 0 and, for every 𝑏 ∈ 𝐵, either 𝑎 ∧ 𝑏 = 𝑎 or 𝑎 ∧ 𝑏 = 0. Equivalently, 𝑎 is an atom of 𝐵 if there is no non-zero 𝑏 ∈ 𝐵 distinct from 𝑎 such that 𝑏 ≤ 𝑎.
Lemma 4.1.13: Let 𝐵 be a finite Boolean algebra. If 𝑏 is a non-zero element of 𝐵, then there
is an atom 𝑎 in 𝐵 such that 𝑎 ≤ 𝑏.
Lemma 4.1.14: Let 𝑎 and 𝑏 be atoms in a finite Boolean algebra 𝐵 such that 𝑎 ≠ 𝑏. Then 𝑎 ∧
𝑏 = 0.
Proof: Since 𝑎 ∧ 𝑏 is the greatest lower bound of 𝑎 and 𝑏, we know that 𝑎 ∧ 𝑏 ≤ 𝑎. Hence, as 𝑎 is an atom, either 𝑎 ∧ 𝑏 = 𝑎 or 𝑎 ∧ 𝑏 = 0. If 𝑎 ∧ 𝑏 = 𝑎 then 𝑎 ≤ 𝑏, and since 𝑎 is non-zero and 𝑏 is an atom this forces 𝑎 = 𝑏, a contradiction. Therefore 𝑎 ∧ 𝑏 = 0.
Lemma 4.1.15: Let 𝐵 be a Boolean algebra and 𝑎, 𝑏 ∈ 𝐵. Then following statements are
equivalent.
1. 𝑎 ≤ 𝑏,
2. 𝑎 ∧ 𝑏´ = 0,
3. 𝑎´ ∨ 𝑏 = 1.
Lemma 4.1.16: Let 𝐵 be a Boolean algebra and 𝑏 and 𝑐 be elements in 𝐵 such that 𝑏 ≰ 𝑐. Then
there exists an atom 𝑎 ∈ 𝐵 such that 𝑎 ≤ 𝑏 and 𝑎 ≰ 𝑐.
Proof: By Lemma 4.1.15, 𝑏 ≰ 𝑐 gives 𝑏 ∧ 𝑐´ ≠ 0. Hence by Lemma 4.1.13 there exists an atom 𝑎 such that 𝑎 ≤ 𝑏 ∧ 𝑐´. Consequently 𝑎 ≤ 𝑏 and 𝑎 ≰ 𝑐.
Lemma 4.1.17: Let 𝑏 ∈ 𝐵 and let 𝑎1, …, 𝑎𝑛 be the atoms of 𝐵 such that 𝑎𝑖 ≤ 𝑏 for all 𝑖 = 1, …, 𝑛. Then 𝑏 = 𝑎1 ∨ … ∨ 𝑎𝑛. Furthermore, if 𝑎, 𝑎1, …, 𝑎𝑛 are atoms of 𝐵 such that 𝑎 ≤ 𝑏, 𝑎𝑖 ≤ 𝑏 for all 𝑖 = 1, …, 𝑛 and 𝑏 = 𝑎 ∨ 𝑎1 ∨ … ∨ 𝑎𝑛, then 𝑎 = 𝑎𝑖 for some 𝑖 = 1, …, 𝑛.
Proof: Let 𝑏1 = 𝑎1 ∨ … ∨ 𝑎𝑛. Since 𝑎𝑖 ≤ 𝑏 for each 𝑖, we know that 𝑏1 ≤ 𝑏. If we can show that 𝑏 ≤ 𝑏1 then the lemma is true by antisymmetry. Assume that 𝑏 ≰ 𝑏1. Then by Lemma 4.1.16 there exists an atom 𝑎 such that 𝑎 ≤ 𝑏 and 𝑎 ≰ 𝑏1. Since 𝑎 is an atom and 𝑎 ≤ 𝑏, we can deduce that 𝑎 = 𝑎𝑖 for some 𝑎𝑖. However, this is impossible since then 𝑎 ≤ 𝑏1. Therefore 𝑏 ≤ 𝑏1.
Theorem 4.1.18: Let 𝐵 be a finite Boolean algebra. Then there exists a set 𝑋 such that 𝐵 is
isomorphic to 𝑃(𝑋).
Proof: We will show that 𝐵 is isomorphic to 𝑃(𝑋), where 𝑋 is the set of atoms of 𝐵. Let 𝑎 ∈ 𝐵. By Lemma 4.1.17, we can write 𝑎 uniquely as 𝑎 = 𝑎1 ∨ … ∨ 𝑎𝑛 for 𝑎1, …, 𝑎𝑛 ∈ 𝑋. Consequently
we can define a map 𝜙: 𝐵 → 𝑃(𝑋) by
𝜙(𝑎) = 𝜙(𝑎1 ∨ … ∨ 𝑎𝑛 ) = {𝑎1 , … , 𝑎𝑛 }.
Clearly 𝜙 is onto.
Now let 𝑎 = 𝑎1 ∨ … ∨ 𝑎𝑛 and 𝑏 = 𝑏1 ∨ … ∨ 𝑏𝑚 be elements in 𝐵, where each 𝑎𝑖 and each 𝑏𝑖
is an atom. If 𝜙(𝑎) = 𝜙(𝑏) then {𝑎1 , … , 𝑎𝑛 } = {𝑏1 , … , 𝑏𝑚 } and 𝑎 = 𝑏. Consequently 𝜙 is
injective.
The join of 𝑎 and 𝑏 is preserved by 𝜙 since
𝜙(𝑎 ∨ 𝑏) = 𝜙(𝑎1 ∨ … ∨ 𝑎𝑛 ∨ 𝑏1 ∨ … ∨ 𝑏𝑚)
= {𝑎1, …, 𝑎𝑛, 𝑏1, …, 𝑏𝑚}
= {𝑎1, …, 𝑎𝑛} ∪ {𝑏1, …, 𝑏𝑚}
= 𝜙(𝑎1 ∨ … ∨ 𝑎𝑛) ∪ 𝜙(𝑏1 ∨ … ∨ 𝑏𝑚)
= 𝜙(𝑎) ∪ 𝜙(𝑏).
Similarly 𝜙(𝑎 ∧ 𝑏) = 𝜙(𝑎) ∩ 𝜙(𝑏). Thus meet and join are both preserved, hence the result
follows.
Corollary 4.1.19: If 𝐵 is a finite Boolean algebra then 𝐵 has 2𝑛 elements where 𝑛 is the number
of atoms in 𝐵.
Proof: From the theorem we have 𝐵 ≃ ℙ(𝐸) for some finite set 𝐸. Without loss of generality we may assume that 𝐸 = {1, 2, …, 𝑛}. Let 𝟐 denote the two-element chain 0 < 1 and consider the mapping 𝑓: ℙ(𝐸) ⟶ 𝟐ⁿ given by 𝑓(𝑋) = (𝑥1, 𝑥2, ..., 𝑥𝑛) where
𝑥𝑖 = 1 if 𝑖 ∈ 𝑋, and 𝑥𝑖 = 0 otherwise.
Given 𝐴, 𝐵 ∈ ℙ(𝐸), let 𝑓(𝐴) = (𝑎1, 𝑎2, ..., 𝑎𝑛) and 𝑓(𝐵) = (𝑏1, 𝑏2, ..., 𝑏𝑛). Then we have 𝐴 ⊆ 𝐵 if and only if for every 𝑖, 𝑖 ∈ 𝐴 implies 𝑖 ∈ 𝐵, which is equivalent to: 𝑎𝑖 = 1 implies 𝑏𝑖 = 1, that is to 𝑎𝑖 ≤ 𝑏𝑖, which by definition is equivalent to 𝑓(𝐴) ≤ 𝑓(𝐵). Moreover, given any 𝑥 = (𝑥1, 𝑥2, ..., 𝑥𝑛) ∈ 𝟐ⁿ we have 𝑓(𝐶) = 𝑥, where 𝐶 = {𝑖 | 𝑥𝑖 = 1}. It therefore follows by Theorem 1.5.2 that ℙ(𝐸) ≃ 𝟐ⁿ. Hence 𝐵 ≃ 𝟐ⁿ and so has 2ⁿ elements. Moreover, since 𝐸 has 𝑛 elements, 𝐵 ≃ ℙ(𝐸) has 𝑛 atoms.
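The theorem and its corollary can be observed concretely in the Boolean algebra of divisors of 30 (a sketch of my own; the atoms turn out to be the primes 2, 3, 5):

```python
# Every element of a finite Boolean algebra is the join of the atoms below
# it, and the algebra has 2^n elements where n is the number of atoms.
from math import gcd
from functools import reduce

def lcm(a, b):
    return a * b // gcd(a, b)

B = [d for d in range(1, 31) if 30 % d == 0]          # the algebra
atoms = [p for p in B if p != 1 and all(p % d for d in B if 1 < d < p)]
assert atoms == [2, 3, 5]
assert len(B) == 2 ** len(atoms)                       # 2^3 = 8 elements

for b in B:
    below = [a for a in atoms if b % a == 0]           # atoms a with a <= b
    assert b == reduce(lcm, below, 1)                  # b = join of its atoms
print("every divisor of 30 is the lcm of the primes dividing it")
```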
4.2 Boolean Rings
In this section we will discuss Boolean rings and some of their types. We will also look at some properties of Boolean rings. The section ends with some important results on Boolean rings.
A ring is an abstract version of arithmetic, the kind of thing we studied in school. The prototype is the ring of integers. It consists of a universe (the set of integers) and three operations on the universe: the binary operations of addition and multiplication, and the unary operation of negation. There are also two distinguished integers, zero and one.
The set of integers satisfies a number of basic laws that are familiar from school mathematics:
(1) 𝑝 + (𝑞 + 𝑟) = (𝑝 + 𝑞) + 𝑟;
(2) 𝑝 · (𝑞 · 𝑟) = (𝑝 · 𝑞) · 𝑟.
(3) 𝑝 + 𝑞 = 𝑞 + 𝑝;
(4) 𝑝 · 𝑞 = 𝑞 · 𝑝.
(5) 𝑝 + 0 = 𝑝;
(6) 𝑝 · 1 = 𝑝.
(7) 𝑝 + (−𝑝) = 0.
(8) 𝑝 · (𝑞 + 𝑟) = 𝑝 · 𝑞 + 𝑝 · 𝑟;
(9) (𝑞 + 𝑟) · 𝑝 = 𝑞 · 𝑝 + 𝑟 · 𝑝.
Any set (universe) with such operations satisfying the above properties is called a ring. The difference between the ring of integers and an arbitrary ring is that, in the latter, the universe may be an arbitrary non-empty set of elements, not just the set of integers, and the operations take their arguments and values from this set. The commutative law for multiplication is not required to hold in an arbitrary ring; if it does, the ring is said to be commutative. Also, a ring is not always required to have a unit, an element 1 satisfying (6); if it does, it is called a ring with unit.
There are other natural examples of rings besides the integers. The most trivial is the ring with
just one element in its universe: zero. It is called the degenerate ring. The simplest non-
degenerate ring with unit has just two elements, zero and one. The operations of addition and
multiplication are described by the following arithmetic tables:
+ 0 1 . 0 1
0 0 1 0 0 0
1 1 0 1 0 1
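The two tables above are exactly XOR and AND on {0, 1}; a brief sketch (my own) verifying the special properties noted in the remark that follows:

```python
# The two-element ring: addition is XOR, multiplication is AND.
add = lambda p, q: p ^ q          # the addition table (+)
mul = lambda p, q: p & q          # the multiplication table (·)

for p in (0, 1):
    assert add(p, p) == 0         # law (10): characteristic 2
    assert mul(p, p) == p         # law (11): idempotence
print("the two-element ring satisfies p + p = 0 and p · p = p")
```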
Remark 4.2.1: An examination of the above tables shows that the two-element ring has several
special properties. First of all, every element is its own additive inverse, that is:
(10) 𝑝 + 𝑝 = 0.
Therefore, the operation of negation is superfluous: every element is its own negative. Rings
satisfying condition (10) are said to have characteristic 2.
Second, every element is its own square, that is;
(11) 𝑝 · 𝑝 = 𝑝.
Element with this property are called idempotent. Rings with the property that every element
is idempotent have special name as defined below:
Definition 4.2.2: A ring 𝑅 with unit is said to be Boolean ring if every element of it is
idempotent.
Example 4.2.3: The two-element ring is the simplest non- degenerate example of Boolean ring.
The condition of idempotence in the definition of a Boolean ring has quite a strong influence
on the structure of such rings. Two of its most surprising consequences are proved in the next
proposition.
Definition 4.2.4: The characteristic of a ring 𝑅 is the least positive integer 𝑛 such that 𝑛𝑥 = 0 for all 𝑥 ∈ 𝑅.
Since negation in Boolean rings is the identity operation, it is not necessary to use the minus sign for the additive inverse of an element of a Boolean ring. So in the case of Boolean rings only a slight modification of the set of ring axioms is needed: the identity (7) should be replaced by (10). From now on the official axioms for a Boolean ring are
(1)-(3), (5), (6), (8)-(11).
Example 4.2.6: The universe of this example consists of ordered pairs (𝑝, 𝑞) of elements of the universe as in example 4.2.2. Let 𝑆 denote this universe; the sum and product of two pairs are defined coordinate-wise:
(𝑝1, 𝑞1) + (𝑝2, 𝑞2) = (𝑝1 + 𝑝2, 𝑞1 + 𝑞2),
(𝑝1, 𝑞1) · (𝑝2, 𝑞2) = (𝑝1 · 𝑝2, 𝑞1 · 𝑞2).
These equations make sense because their right sides refer to the elements and operations of 𝑅. The zero and unit of the ring are the pairs (0, 0) and (1, 1).
It is a simple matter to check that the axioms for Boolean rings are true in S. In each case, the
verification of the axioms reduces to its validity in 𝑅.
Example 4.2.7: The preceding example can easily be generalized to each positive integer 𝑛. The universe in this case is the set of 𝑛-termed sequences (𝑝0, 𝑝1, ..., 𝑝𝑛−1) of elements from the universe as in example 4.2.2. The sum and product of two such 𝑛-tuples are defined coordinate-wise, just as in the case of ordered pairs in example 4.2.6:
(𝑝0, 𝑝1, ..., 𝑝𝑛−1) + (𝑞0, 𝑞1, ..., 𝑞𝑛−1) = (𝑝0 + 𝑞0, 𝑝1 + 𝑞1, ..., 𝑝𝑛−1 + 𝑞𝑛−1),
(𝑝0, 𝑝1, ..., 𝑝𝑛−1) · (𝑞0, 𝑞1, ..., 𝑞𝑛−1) = (𝑝0 · 𝑞0, 𝑝1 · 𝑞1, ..., 𝑝𝑛−1 · 𝑞𝑛−1).
The zero and unit are the 𝑛-tuples (0, 0, ..., 0) and (1, 1, ..., 1).
Example 4.2.8: Let 𝑋 be an arbitrary set and 𝑅𝑋 the set of all functions from 𝑋 into 𝑅 as defined in example 4.2.2. The elements of 𝑅𝑋 will be called 2-valued functions on 𝑋. The distinguished elements and the operations of 𝑅𝑋 are defined point-wise. This means that 0 and 1 in 𝑅𝑋 are the constant functions defined for each 𝑥 in 𝑋 by
0(𝑥) = 0 and 1(𝑥) = 1,
and the functions 𝑝 + 𝑞 and 𝑝 · 𝑞 are defined by
(𝑝 + 𝑞)(𝑥) = 𝑝(𝑥) + 𝑞(𝑥),
(𝑝 · 𝑞)(𝑥) = 𝑝(𝑥) · 𝑞(𝑥).
The above equations make sense as their right hand sides refer to elements and operations of
𝑅 as defined in example 4.2.2.
Verifying that 𝑅𝑋 is a Boolean ring is conceptually the same as verifying that 𝑆 (as in example 4.2.6) is a Boolean ring, but notationally it looks a bit different. Consider as an example the verification of the distributive law (8). In the context of 𝑅𝑋, the left and right sides of (8)
denote functions from 𝑋 into 𝑅. It must be shown that these two functions are equal. They
obviously have the same domain 𝑋, so it suffices to check that the values of the two functions
at each element 𝑥 in the domain agree that is
{𝑝. (𝑞 + 𝑟)} (𝑥) = (𝑝. 𝑞 + 𝑝. 𝑟) (𝑥) (14)
The left and right sides of (14) evaluate to
𝑝(𝑥). {𝑞(𝑥) + 𝑟(𝑥)} and 𝑝(𝑥) . 𝑞(𝑥) + 𝑝(𝑥). 𝑟(𝑥) (15)
respectively, by the definitions of addition and multiplication in 𝑅 𝑋 . Each of these terms
denotes an element of 𝑅. Since, the distributive law holds in 𝑅, the terms in (15) are equal.
Therefore the equation (14) is true. The other Boolean axioms are verified for 𝑅 𝑋 in similar
fashion.
The next results show that some additional properties hold beyond those listed in (1)-(11), as described below:
Definition 4.2.10: A Boolean group 𝐵 is a group in which every element has order two (in
other words the law (10) is valid, that is for all 𝑝 ∈ 𝐵 𝑝 + 𝑝 = 0).
Theorem 4.2.11: Every Boolean group 𝐵 is commutative (that is, the commutative law (3) is
valid).
Definition 4.2.12: A zero divisor in a ring is a non-zero element 𝑝 such that 𝑝 ∙ 𝑞 = 0 for some
non-zero element 𝑞.
Theorem 4.2.13: A Boolean ring (with or without unit) having more than two elements has zero divisors.
Proof: Let 𝐵 be a Boolean ring having more than two elements. Then there exist two distinct non-zero elements 𝑥, 𝑦 ∈ 𝐵, and 𝑥 + 𝑦 ≠ 0 since 𝑥 ≠ 𝑦 (by characteristic 2, 𝑥 + 𝑦 = 0 would give 𝑦 = 𝑥).
Now we have following cases to consider:
Case 1: If 𝑥 ∙ 𝑦 = 0, then we are done.
Case 2: If 𝑥 ∙ 𝑦 ≠ 0, then;
(𝑥 ∙ 𝑦) ∙ (𝑥 + 𝑦) = 𝑥 ∙ 𝑦 ∙ 𝑥 + 𝑥 ∙ 𝑦 ∙ 𝑦
= 𝑥2 ∙ 𝑦 + 𝑥 ∙ 𝑦2
=𝑥∙𝑦+𝑥∙𝑦 (using 11),
= 0.
Hence 𝑥 ∙ 𝑦 is zero-divisor in this case. Thus the result follows.
Motivated by this set-theoretic example, we can introduce into every Boolean algebra
𝐴 operations of addition and multiplication very much like symmetric difference and
intersection; just define;
(3) 𝑝 + 𝑞 = (𝑝 ∧ 𝑞′) ∨ (𝑝′ ∧ 𝑞) and 𝑝 · 𝑞 = 𝑝 ∧ 𝑞.
Under these operations, together with 0 and 1 (the zero and unit of the Boolean algebra),
𝐴becomes a Boolean ring. Conversely, every Boolean ring can be turned into a Boolean algebra
with the same zero and unit; just define operations of join, meet, and complement by;
(4) 𝑝 ∨ 𝑞 = 𝑝 + 𝑞 + 𝑝・𝑞, 𝑝 ∧ 𝑞 = 𝑝・𝑞 and 𝑝’ = 𝑝 + 1.
Start with a Boolean algebra, turn it into a Boolean ring (with the same zero and unit) using the
definitions in (3), and then convert the ring into a Boolean algebra using the definitions in (4);
the result is the original Boolean algebra. Conversely start with a Boolean ring, convert it into
a Boolean algebra using the definitions in (4) and then convert the Boolean algebra into a
Boolean ring using the definitions in (3); the result is the original ring.
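The round trip described here can be verified mechanically for the power-set algebra of a small set. A sketch (my own; symmetric difference plays the role of + and intersection the role of ·):

```python
# Converting the Boolean algebra P(X) into a ring by (3) and recovering
# the algebra operations by (4).
from itertools import combinations

X = frozenset({1, 2, 3})
subsets = [frozenset(c) for r in range(4) for c in combinations(X, r)]

add = lambda p, q: (p - q) | (q - p)     # p + q = (p ∧ q') ∨ (p' ∧ q)
mul = lambda p, q: p & q                 # p · q = p ∧ q

for p in subsets:
    for q in subsets:
        # (4): recover join and meet from the ring operations
        assert add(add(p, q), mul(p, q)) == p | q    # p ∨ q = p + q + p·q
        assert mul(p, q) == p & q                    # p ∧ q = p·q
    assert add(p, X) == X - p                        # p' = p + 1
print("algebra → ring → algebra returns the original operations")
```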
Theorem 4.3.1: Let (𝐵; ∧,∨, ´) be a Boolean algebra. Define a multiplication and an addition
on 𝐵 by setting:
For all 𝑥, 𝑦 ∈ 𝐵 𝑥𝑦 = 𝑥 ∧ 𝑦, 𝑥 + 𝑦 = (𝑥 ∧ 𝑦´) ∨ (𝑥´ ∧ 𝑦).
Then (𝐵; ⋅, +) is a Boolean ring.
Proof: Clearly(𝐵; ⋅) is a semigroup with an identity, namely the top element 1 of 𝐵. Moreover,
for every 𝑥 ∈ 𝐵 we have 𝑥 2 = 𝑥 ∙ 𝑥 = 𝑥 ∧ 𝑥 = 𝑥, so every element is idempotent. Now given
𝑥, 𝑦, 𝑧 ∈ 𝐵 it is easy to verify that;
(𝑥 + 𝑦) + 𝑧 = (𝑥 ∧ 𝑦´ ∧ 𝑧´) ∨ (𝑥´ ∧ 𝑦 ∧ 𝑧´) ∨ (𝑥´ ∧ 𝑦´ ∧ 𝑧) ∨ (𝑥 ∧ 𝑦 ∧ 𝑧),
which, being symmetric in 𝑥, 𝑦, 𝑧 is also equal to 𝑥 + (𝑦 + 𝑧). Since 𝑥 + 0 = (𝑥 ∧ 0´) ∨
(𝑥´ ∧ 0) = (𝑥 ∧ 1) ∨ 0 = 𝑥 and 𝑥 + 𝑥 = (𝑥 ∧ 𝑥´) ∨ (𝑥´ ∧ 𝑥) = 0 ∨ 0 = 0, we see that
(𝐵; +) is an abelian group in which −𝑥 = 𝑥 for every 𝑥 ∈ 𝐵. Finally, for all 𝑥, 𝑦, 𝑧 ∈ 𝐵 we have
𝑥𝑦 + 𝑥𝑧 = [𝑥𝑦 ∧ (𝑥𝑧)´] ∨ [(𝑥𝑦)´ ∧ 𝑥𝑧]
= [𝑥 ∧ 𝑦 ∧ (𝑥 ∧ 𝑧)´] ∨ [(𝑥 ∧ 𝑦)´ ∧ 𝑥 ∧ 𝑧]
= [𝑥 ∧ 𝑦 ∧ (𝑥´ ∨ 𝑧´)] ∨ [(𝑥´ ∨ 𝑦´) ∧ 𝑥 ∧ 𝑧]
= (𝑥 ∧ 𝑦 ∧ 𝑧´) ∨ (𝑥 ∧ 𝑦´ ∧ 𝑧)
= 𝑥 ∧ [(𝑦 ∧ 𝑧´) ∨ (𝑦´ ∧ 𝑧)]
= 𝑥(𝑦 + 𝑧).
Propositional Calculus
We now turn our attention to the applications of Boolean algebra to two-valued logic and in
particular to the calculus of propositions. Historically, lattice theory had its beginnings in the investigations of Boole into the formalization of logic.
Definition 4.4.1: By a proposition we mean a statement which in some clearly defined sense
is either true (𝑇) or false (𝐹). Thus of the two propositions
Grass is green,
Fish grow on trees,
the first is 𝑇 and the second is 𝐹. With the help of words ‘and’ (∧), ‘or’ (∨), ‘not’ (∼)
compound propositions such as
Grass is not green,
Grass is green and fish grow on trees,
can be constructed from simpler ones and truth values 𝑇 or 𝐹 of compound propositions may
be calculated from those of simpler ones of which they are composed by means of the logical
matrices defined below.
Before defining logical matrices, we discuss the notation as follows;
Notation: By the symbol ‘∼’ we denote not or the negation of any statement which we have
denoted before by symbol ′ (complementation), so we use the symbol ′ throughout instead of
‘∼’.
Definition 4.4.2: Logical matrices are to be regarded as statements of the axioms upon which the propositional calculus is based. They are as follows:
∧ 𝑇 𝐹        ∨ 𝑇 𝐹        ′
𝑇 𝑇 𝐹        𝑇 𝑇 𝑇        𝑇 𝐹
𝐹 𝐹 𝐹        𝐹 𝑇 𝐹        𝐹 𝑇
Thus ‘grass is green and fish grow on trees’ is 𝐹 because 𝐹 appears in row 𝑇 and column 𝐹 of the matrix for ∧.
In general, the truth value of compound proposition can be determined from the logical
matrices and from the truth values of the elementary propositions composing it. There are
however certain compound propositions whose truth value can be determined without a
knowledge of the truth values of the elementary propositions. For instance
Shakespeare wrote Hamlet or Shakespeare did not write Hamlet
is 𝑇 whether or not Shakespeare wrote Hamlet. In the above compound proposition we can replace the elementary proposition ‘Shakespeare wrote Hamlet’ by any other proposition 𝑝 without altering its truth value. Thus 𝑝 ∨ 𝑝′ is 𝑇 for every proposition 𝑝. Similarly 𝑝 ∧ 𝑝′ is 𝐹 for every proposition 𝑝.
These results can be calculated from the logical matrices alone. Such computations are
conveniently set out in the form of truth tables. In such a table all possible combinations of
truth values for the elementary propositions involved are tabulated to the left of the double line.
The columns to the right of the double line are then computed in succession from the logical
matrices. The truth tables for 𝑝 ∨ 𝑝′ and for 𝑝 ∧ 𝑝′ may be set down together as follows.
𝑝 𝑝′ 𝑝 ∨ 𝑝′ 𝑝 ∧ 𝑝′
𝑇 𝐹 𝑇 𝐹
𝐹 𝑇 𝑇 𝐹
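Truth tables like this one can be generated mechanically. A sketch (mine), treating 𝑇/𝐹 as Python booleans:

```python
# Rows of the truth table for p, p', p ∨ p' and p ∧ p'.
for p in (True, False):
    np = not p                           # p'
    row = (p, np, p or np, p and np)     # p, p', p ∨ p', p ∧ p'
    print(" ".join("T" if v else "F" for v in row))
    # prints "T F T F" then "F T T F", matching the table above

# p ∨ p' is a tautology and p ∧ p' an absurdity:
assert all(p or not p for p in (True, False))
assert not any(p and not p for p in (True, False))
```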
Definition 4.4.4: A proposition 𝑞, such as 𝑝 ∨ 𝑝′, is said to be formally true and is called a
tautology if every proposition 𝑞 ∗ obtained from 𝑞 by replacing its elementary propositions by
arbitrary propositions is 𝑇. Correspondingly 𝑞 is said to be formally false and is called an
absurdity or a contradiction if every 𝑞∗ is 𝐹. We observe that this notation permits us to write down the truth table of the proposition (𝑝 ∧ 𝑞) ∨ (𝑝′ ∧ 𝑞′), which we denote by 𝑝 ⟷ 𝑞:
𝑝 𝑞 𝑝 ∧ 𝑞 𝑝′ 𝑞′ 𝑝′ ∧ 𝑞′ (𝑝 ∧ 𝑞) ∨ (𝑝′ ∧ 𝑞′)
𝑇 𝑇 𝑇 𝐹 𝐹 𝐹 𝑇
𝑇 𝐹 𝐹 𝐹 𝑇 𝐹 𝐹
𝐹 𝑇 𝐹 𝑇 𝐹 𝐹 𝐹
𝐹 𝐹 𝐹 𝑇 𝑇 𝑇 𝑇
so that the matrix for ⟷ is
⟷ 𝑇 𝐹
𝑇 𝑇 𝐹
𝐹 𝐹 𝑇
We see that 𝑝 ⟷ 𝑞 is 𝑇 if and only if 𝑝 and 𝑞 have equal truth values whatever the truth values
of the elementary propositions composing 𝑝 and 𝑞 may be.
Definition 4.4.5: If the proposition 𝑝 ⟷ 𝑞 is formally true, then we call 𝑝, 𝑞 equivalent propositions and write 𝑝 ≡ 𝑞.
Proof: In each case an indirect proof can be constructed. We exhibit the details for (iii); the rest follow by a similar argument. Suppose that 𝑝 ≢ 𝑟. Then 𝑝∗ ⟷ 𝑟∗ is 𝐹 for some choice of 𝑝∗ and 𝑟∗, and so for this choice 𝑝∗ and 𝑟∗ have different truth values. If we suppose 𝑝∗ is 𝑇 and 𝑟∗ is 𝐹, then 𝑝 ≡ 𝑞 states that 𝑝∗ ⟷ 𝑞∗ is 𝑇 for every 𝑞∗, and consequently each 𝑞∗, like 𝑝∗, is 𝑇. Further, 𝑞 ≡ 𝑟 states that 𝑞∗ ⟷ 𝑟∗ is 𝑇, from which we see that 𝑟∗, like 𝑞∗, is 𝑇, in contradiction to the supposition that 𝑟∗ is 𝐹. A similar contradiction arises if we suppose that 𝑝∗ is 𝐹 and 𝑟∗ is 𝑇. Thus the validity of (iii) has been demonstrated.
𝑝1 ∧ 𝑞1 ≡ 𝑝2 ∧ 𝑞1 ≡ 𝑞1 ∧ 𝑝2 ≡ 𝑞2 ∧ 𝑝2 ≡ 𝑝2 ∧ 𝑞2 .
Remark 4.4.7: The (i), (ii), (iii) of above result show that ≡ is an equivalence relation which
we might well have denoted by =. Our object however has been to elucidate the meaning of
this kind of equality and for this purpose we think the notation ≡ more suggestive.
It is in this sense that the postulates for a distributive lattice are satisfied by interpreting ∨ and ∧ as union and intersection. For instance, the distributive law takes the form
𝑝 ∧ (𝑞 ∨ 𝑟) ≡ (𝑝 ∧ 𝑞) ∨ (𝑝 ∧ 𝑟)
and its validity can be demonstrated by showing that each side has the same truth value
whatever the truth values of 𝑝, 𝑞 and 𝑟 may be. This is done in the following truth table
𝑝 𝑞 𝑟 𝑞 ∨ 𝑟 𝑝 ∧ (𝑞 ∨ 𝑟) 𝑝 ∧ 𝑞 𝑝 ∧ 𝑟 (𝑝 ∧ 𝑞) ∨ (𝑝 ∧ 𝑟)
𝑇 𝑇 𝑇 𝑇 𝑇 𝑇 𝑇 𝑇
𝑇 𝑇 𝐹 𝑇 𝑇 𝑇 𝐹 𝑇
𝑇 𝐹 𝑇 𝑇 𝑇 𝐹 𝑇 𝑇
𝑇 𝐹 𝐹 𝐹 𝐹 𝐹 𝐹 𝐹
𝐹 𝑇 𝑇 𝑇 𝐹 𝐹 𝐹 𝐹
𝐹 𝑇 𝐹 𝑇 𝐹 𝐹 𝐹 𝐹
𝐹 𝐹 𝑇 𝑇 𝐹 𝐹 𝐹 𝐹
𝐹 𝐹 𝐹 𝐹 𝐹 𝐹 𝐹 𝐹
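The eight rows can be checked in one pass; a sketch (my own) of the same computation:

```python
# Verifying p ∧ (q ∨ r) ≡ (p ∧ q) ∨ (p ∧ r) over all truth assignments.
from itertools import product

for p, q, r in product((True, False), repeat=3):
    assert (p and (q or r)) == ((p and q) or (p and r))
print("p ∧ (q ∨ r) ≡ (p ∧ q) ∨ (p ∧ r) in all 8 cases")
```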
In clarification of the meanings of 𝑝 ⟷ 𝑞 and 𝑝 ≡ 𝑞 we point out that there are different
categories of propositions. We can consider elementary propositions as being in the lowest
category. The statement 𝑝 ⟷ 𝑞 is in higher category since it is a proposition about propositions
namely that 𝑝, 𝑞 have the same truth value. 𝑝 ≡ 𝑞 is in a higher category still, since it is a
proposition about a proposition about propositions namely that 𝑝 ⟷ 𝑞 is formally true.
Remarks 4.4.8: 1. The equivalence relation mentioned earlier separates all propositions into
equivalence classes and we may denote by 𝑝̅ the class to which the proposition 𝑝 belongs. That
is, the propositions 𝑝 and 𝑞 belong to the same class if and only if 𝑝 ≡ 𝑞. Expressing this in
another way, 𝑝̅ = 𝑞̅ if and only if 𝑝 ≡ 𝑞.
2. The results (iv), (ix) and (x) show that the operations ´, ∧and ∨ are stable in relation to the
equivalence relation and that the equivalence relation is indeed a congruence relation which
enables us to define unambiguously the following operations on the set of equivalence classes:
𝑝̅ = 𝑞̅´ if and only if 𝑝 ≡ 𝑞´,
𝑝̅ = 𝑞̅ ∩ 𝑟̅ if and only if 𝑝 ≡ 𝑞 ∧ 𝑟,
𝑝̅ = 𝑞̅ ∪ 𝑟̅ if and only if 𝑝 ≡ 𝑞 ∨ 𝑟.
3. If, for example, (ix) were not true, then for some 𝑝1, 𝑞1, 𝑝2, 𝑞2 we would have 𝑝1 ≡ 𝑞1, 𝑝2 ≡ 𝑞2 but 𝑝1 ∧ 𝑝2 ≢ 𝑞1 ∧ 𝑞2, with the consequence that 𝑝̅1 = 𝑞̅1 and 𝑝̅2 = 𝑞̅2 while the classes of 𝑝1 ∧ 𝑝2 and 𝑞1 ∧ 𝑞2 differ, and we could not define 𝑝̅1 ∩ 𝑝̅2 as the class of 𝑝1 ∧ 𝑝2 without involving ambiguity.
4. We observe that the statement 𝑝 ∨ 𝑝´ ⟷ 𝑞 ∨ 𝑞´ is always 𝑇 since each side has the same
truth value 𝑇 for all 𝑝, 𝑞. Thus 𝑝 ∨ 𝑝´ ≡ 𝑞 ∨ 𝑞´ and we may denote the class to which 𝑝 ∨ 𝑝´
belongs by 𝐼. Thus
𝐼 = (𝑝 ∨ 𝑝´)̅ = 𝑝̅ ∪ 𝑝̅´.
In the same way 𝑝 ∧ 𝑝´ ≡ 𝑞 ∧ 𝑞´, since each side is always 𝐹 and we can write
𝑂 = (𝑝 ∧ 𝑝´)̅ = 𝑝̅ ∩ 𝑝̅´.
In fact 𝐼 is the class of all tautologies and 𝑂 is the class of all contradictions.
Theorem 4.4.10: The classes 𝑝̅, 𝑞̅ , … , 𝑂, 𝐼 of propositions form a Boolean algebra in relation
to the operations ∩, ∪ and ´ defined above.
Proof: We need not give a detailed proof for each postulate. We have already seen in (vii) that
𝑝 ∧ 𝑞 ≡ 𝑞 ∧ 𝑝 so we obtain commutative laws from
𝑝̅ ∩ 𝑞̅ = (𝑝 ∧ 𝑞)̅ = (𝑞 ∧ 𝑝)̅ = 𝑞̅ ∩ 𝑝̅.
The other postulates for a distributive lattice are proved in a similar manner. For instance we
show by constructing a truth table that 𝑝 ≡ 𝑝 ∧ (𝑝 ∨ 𝑞) from which we get
𝑝̅ = (𝑝 ∧ (𝑝 ∨ 𝑞))̅ = 𝑝̅ ∩ (𝑝 ∨ 𝑞)̅ = 𝑝̅ ∩ (𝑝̅ ∪ 𝑞̅),
which is the absorption law. Further,
𝑝̅ ∪ 𝐼 = 𝑝̅ ∪ (𝑝̅ ∪ 𝑝̅´) = (𝑝̅ ∪ 𝑝̅) ∪ 𝑝̅´ = 𝑝̅ ∪ 𝑝̅´ = 𝐼,
and dually, 𝑝̅ ∩ 𝑂 = 𝑂. These formulae imply that 𝐼 is the top element and that 𝑂 is the bottom
element. The relations 𝑝̅ ∪ 𝑝̅ ´ = 𝐼, 𝑝̅ ∩ 𝑝̅ ´ = 𝑂 now demonstrate that the lattice is
complemented since each class 𝑝̅ has a complement 𝑝̅´. Thus the lattice is a Boolean algebra.
Hence the result follows.
Switching Circuits
By a switching circuit we mean a piece of electrical apparatus between the terminals of which
may be one or more switches of different sorts. These switches may be hand operated, or may
be operated by the circuit itself, or by other circuits. Since we are only concerned with whether
or not a current flows in a circuit when a potential difference is applied between two of the
terminals, we take into account neither the magnitude of the current nor the magnitudes of the
component resistances. At any instant a given switch 𝑎 is supposed to be either open (𝑎 = 0)
or closed (𝑎 = 1). By means of electrical relays it is possible to arrange that a number of other
switches are open when 𝑎 is open and are closed when 𝑎 is closed. We shall denote each of
these by 𝑎, so that 𝑎 really denotes a class of switches which are either simultaneously open or
simultaneously closed. Again, another set of switches 𝑎′ can be operated by relays so that each
switch 𝑎′ is open when 𝑎 is closed and is closed when 𝑎 is open. In the accompanying
diagrams the lines indicate conductors while the lettered gaps in the conductors denote
switches. The boxes containing letters denote relays which may be used to operate other
switches. Fig. 1 denotes a circuit containing a single switch 𝑎 and a relay. A current flows in
this circuit only when 𝑎 = 1 and, when it does so, it operates the relay, which may be used to
operate other switches 𝑎 and also to operate switches 𝑎′. The circuit of fig. 2 has two switches
𝑎 and 𝑏 in series and will be
denoted by 𝑎 ∩ 𝑏.
Since this circuit is closed if and only if 𝑎 and 𝑏 are both closed,
𝑎 ∩ 𝑏 = 1 ⟺ 𝑎 = 1 ∧ 𝑏 = 1,
𝑎 ∩ 𝑏 = 0 ⟺ 𝑎 = 0 ∨ 𝑏 = 0.        (1)
When the relay of this circuit operates a switch 𝑐 such that 𝑐 = 1 when 𝑎 ∩ 𝑏 = 1 and 𝑐 = 0
when 𝑎 ∩ 𝑏 = 0, it is natural to write 𝑐 = 𝑎 ∩ 𝑏. In effect, this means that not only can a single
letter denote a class of switches but a single letter or single formula can denote a class of
equivalent circuits all of which are open simultaneously or all closed simultaneously.
In a similar manner a circuit containing two switches 𝑎 and 𝑏 in parallel will be denoted by
𝑎 ∪ 𝑏 (fig. 3), and it is easy to see that
𝑎 ∪ 𝑏 = 1 ⟺ 𝑎 = 1 ∨ 𝑏 = 1,
𝑎 ∪ 𝑏 = 0 ⟺ 𝑎 = 0 ∧ 𝑏 = 0.        (2)
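Rules (1) and (2) say that a series connection is closed exactly when both switches are, and a parallel connection is open exactly when both are. With switch states written as 0 and 1, a short sketch:

```python
from itertools import product

# Switch states: 0 = open, 1 = closed.
def series(a, b):    # a ∩ b: switches in series
    return a & b

def parallel(a, b):  # a ∪ b: switches in parallel
    return a | b

for a, b in product([0, 1], repeat=2):
    assert (series(a, b) == 1) == (a == 1 and b == 1)    # rule (1)
    assert (parallel(a, b) == 0) == (a == 0 and b == 0)  # rule (2)
```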
As our notation suggests, the component circuits, or rather the classes of equivalent circuits, are
the elements of a Boolean algebra. The verification of the postulates of a distributive lattice is
accomplished in exactly the same manner as was done in the propositional calculus by means
of truth tables. For instance, the two circuits in fig. 4 and fig. 5 are either simultaneously open
or simultaneously closed, as is demonstrated by the accompanying truth table.
𝑎 𝑏 𝑐 𝑎∪𝑏 (𝑎 ∪ 𝑏) ∪ 𝑐 𝑏∪𝑐 𝑎 ∪ (𝑏 ∪ 𝑐)
0 0 0 0 0 0 0
0 0 1 0 1 1 1
0 1 0 1 1 1 1
0 1 1 1 1 1 1
1 0 0 1 1 0 1
1 0 1 1 1 1 1
1 1 0 1 1 1 1
1 1 1 1 1 1 1
Thus the circuit (𝑎 ∪ 𝑏) ∪ 𝑐 is equivalent to the circuit 𝑎 ∪ (𝑏 ∪ 𝑐). We adopted the convention
that 𝑎 = 1 denotes that the switch or circuit 𝑎 is closed, but we may in fact denote the short
circuit itself by 1 (fig. 6). Thus we may interpret 𝑎 = 1 to mean that 𝑎 is temporarily equivalent
to a short circuit. Similarly we interpret 𝑎 = 0 to mean that 𝑎 is temporarily equivalent to an
open circuit (fig. 6), which is labelled 0.
Since it is easily verified that 𝑎 ∪ 1 = 1, 𝑎 ∩ 0 = 0, it is clear that 1 is the top element and 0 is
the bottom element. An examination of the circuits of fig. 7 reveals that 𝑎 ∪ 𝑎′ = 1, 𝑎 ∩ 𝑎′ =
0, from which it is clear that each class 𝑎 has a complement 𝑎′. So the lattice is a Boolean algebra.
In the circuits of figs. 2 and 3 the switches 𝑎 and 𝑏 might be operated manually, since they may
be operated independently. The circuit in fig. 7 reveals a different situation, for the operation
of 𝑎′ is determined by that of 𝑎 and the one cannot be manipulated independently of the other.
This relationship is expressed by the formula 𝑎 ∪ 𝑎′ = 1.
We now mention some other circuits in which the relationship between the switches is
expressed by an equation in the Boolean algebra.
Example 4.4.11: Consider a circuit with two switches 𝑎, 𝑏 related by the equation
𝑎 ∪ 𝑏 = 𝑎.
Example 4.4.12: Another circuit of special interest contains three switches 𝑎, 𝑏, 𝑐 satisfying
𝑏 = 𝑎 ∩ (𝑏 ∪ 𝑐)
Since this relation implies 𝑎 ≥ 𝑏, this circuit is a modification of the previous one. Assume
initially that 𝑎 = 1; then closing 𝑐 gives 𝑐 = 1, 𝑏 ∪ 𝑐 = 1, 𝑏 = 𝑎 ∩ (𝑏 ∪ 𝑐) = 1, so 𝑏
closes. However, 𝑏 must open immediately when 𝑎 is opened. This is known as a lock-in circuit.
We can suppose that 𝑎 is a break switch which is normally held closed by a spring and that 𝑐
is a make switch normally held open by a spring. The switch 𝑏 is operated by a relay. To close
𝑏 we need only press 𝑐 momentarily; once this is done 𝑏 closes and stays closed until 𝑎 is
pressed, after which it remains open until 𝑐 is pressed again. This circuit is illustrated in fig. 8.
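The behaviour of the lock-in circuit can be simulated by iterating the relay equation 𝑏 = 𝑎 ∩ (𝑏 ∪ 𝑐). The following sketch (switch states 0/1, names as in the example) traces the sequence press-𝑐, release-𝑐, press-𝑎:

```python
def relay(a, b, c):
    """One relay update: the new state of b is a ∩ (b ∪ c)."""
    return a & (b | c)

b = 0                   # relay switch, initially open
b = relay(1, b, 0)      # a closed (at rest), c not pressed: b stays open
assert b == 0
b = relay(1, b, 1)      # press c momentarily: b closes
assert b == 1
b = relay(1, b, 0)      # release c: b stays locked in through b ∪ c
assert b == 1
b = relay(0, b, 0)      # press the break switch a: b opens
assert b == 0
```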
The two principal objects in applying Boolean algebra to switching problems are, first, to design
a circuit with a prescribed function and, secondly, to simplify a circuit without altering its
function.
As an illustration of the first type of problem we consider the construction of a binary adder
which yields the sum of three digits 𝑎, 𝑏, 𝑐 in the binary scale. One of these digits, say 𝑐, will
be a “carry in” from a previous column. Since 𝑎, 𝑏, 𝑐 are each 0 or 1 their sum must be one of
the four integers 0, 1, 2, 3, which in the binary scale take the forms 00, 01, 10, 11. If we
denote this sum in the binary scale by 𝑥𝑦, then 𝑥 is the "carry out" digit which is inserted
in the next column. The summation to be performed and the required values of 𝑥, 𝑦 are as
follows:
𝑦 = 1 if and only if (𝑎 = 0 ∧ 𝑏 = 0 ∧ 𝑐 = 1) ∨ (𝑎 = 0 ∧ 𝑏 = 1 ∧ 𝑐 = 0) ∨ (𝑎 = 1 ∧ 𝑏 =
0 ∧ 𝑐 = 0) ∨ (𝑎 = 1 ∧ 𝑏 = 1 ∧ 𝑐 = 1).
Consequently,
𝑦 = (𝑎′ ∩ 𝑏′ ∩ 𝑐) ∪ (𝑎′ ∩ 𝑏 ∩ 𝑐′) ∪ (𝑎 ∩ 𝑏′ ∩ 𝑐′) ∪ (𝑎 ∩ 𝑏 ∩ 𝑐).
Similarly 𝑥 = 1 if and only if at least two of 𝑎, 𝑏, 𝑐 are equal to 1, so that
𝑥 = (𝑎′ ∩ 𝑏 ∩ 𝑐) ∪ (𝑎 ∩ 𝑏′ ∩ 𝑐) ∪ (𝑎 ∩ 𝑏 ∩ 𝑐′) ∪ (𝑎 ∩ 𝑏 ∩ 𝑐) = (𝑎 ∩ 𝑏) ∪ (𝑏 ∩ 𝑐) ∪ (𝑐 ∩ 𝑎).
The problem of simplifying a circuit is largely one of reducing a given Boolean polynomial to
an equivalent expression which is simpler in form in the sense that fewer letters are required in
writing it down. Thus the first of the two expressions for 𝑥 above requires 12 letters or switches
while the second requires only 6. A further alternative, employing only 5 switches, would be
given by the formula
𝑥 = [𝑎 ∩ (𝑏 ∪ 𝑐)] ∪ (𝑏 ∩ 𝑐).
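All three forms of the carry digit can be compared with ordinary binary addition by exhaustion. In the sketch below the 12-letter and 6-letter expressions are written out on the standard minterm and majority patterns (an assumption, since they are not reproduced verbatim above):

```python
from itertools import product

for a, b, c in product([0, 1], repeat=3):
    s = a + b + c
    x, y = s >> 1, s & 1   # the sum written as the binary digits x, y

    # 12-letter minterm form of the carry (assumed pattern)
    x12 = ((1-a) & b & c) | (a & (1-b) & c) | (a & b & (1-c)) | (a & b & c)
    # 6-letter majority form (assumed pattern)
    x6 = (a & b) | (b & c) | (c & a)
    # 5-letter form given in the text
    x5 = (a & (b | c)) | (b & c)

    assert x12 == x6 == x5 == x
    assert y == a ^ b ^ c   # y is 1 exactly when an odd number of a, b, c are 1
```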
There are of course also certain technical considerations which must be taken into account in
determining which of two circuits should be regarded as the simpler. We consider some aspects
of the problem in next section.
Bridge circuits
The circuits discussed in the previous section have all been of the series-parallel type. If
bilateral elements are used, which conduct current in both directions, it may be possible to
simplify a given circuit by using a bridge circuit such as that of fig. 10. In this circuit we
suppose that when 𝑐 is closed current can flow in either direction through this switch. The
bridge circuit illustrated employs only five switches, though a series-parallel circuit for 𝑥 would
require at least eight, corresponding to
𝑥 = [𝑎 ∩ (𝑒 ∪ (𝑐 ∩ 𝑑))] ∪ [𝑏 ∩ (𝑑 ∪ (𝑐 ∩ 𝑒))]
or ten corresponding to the 𝑥 in the figure. The appropriate formula for such a bridge circuit
can be obtained by enumerating the possible paths of the current and by taking the union of the
Boolean functions for the different paths.
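As a check, the path-by-path formula for the bridge can be compared exhaustively with the series-parallel expression above. The arm labelling 𝑎, 𝑏, 𝑐, 𝑑, 𝑒 below is an assumption reconstructed from that expression, with paths 𝑎–𝑒, 𝑏–𝑑, 𝑎–𝑐–𝑑 and 𝑏–𝑐–𝑒:

```python
from itertools import product

# Series-parallel form: x = [a ∩ (e ∪ (c ∩ d))] ∪ [b ∩ (d ∪ (c ∩ e))]
def series_parallel(a, b, c, d, e):
    return (a & (e | (c & d))) | (b & (d | (c & e)))

# Union of the Boolean functions of the four assumed current paths
def bridge_paths(a, b, c, d, e):
    return (a & e) | (b & d) | (a & c & d) | (b & c & e)

assert all(series_parallel(*s) == bridge_paths(*s)
           for s in product([0, 1], repeat=5))
```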
Definition 4.4.13: A device for the construction of non-series-parallel circuits is the disjunctive
tree which employs transfer switches in which the operation of 𝑎 and 𝑎′ is effected by a single
spring.
Consider the case of three variables (three classes of switches) 𝑎, 𝑏, 𝑐 and suppose that
𝑓(𝑎, 𝑏, 𝑐) is a Boolean function for the class of circuits. Then 𝑓(𝑎, 𝑏, 𝑐) can be written as
𝑓(𝑎, 𝑏, 𝑐) = [𝑎 ∩ 𝑏 ∩ 𝑓(1,1, 𝑐)] ∪ [𝑎 ∩ 𝑏′ ∩ 𝑓(1,0, 𝑐)] ∪ [𝑎′ ∩ 𝑏 ∩ 𝑓(0,1, 𝑐)] ∪ [𝑎′ ∩ 𝑏′ ∩ 𝑓(0,0, 𝑐)].
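This decomposition of 𝑓 by the values of 𝑎 and 𝑏 (a Shannon-style expansion) can be verified exhaustively for any concrete 𝑓. As an illustration we take 𝑓 to be the adder digit 𝑦, i.e. the parity of 𝑎, 𝑏, 𝑐 (a choice made here purely for the example):

```python
from itertools import product

def f(a, b, c):
    # the adder digit y: 1 exactly when an odd number of a, b, c are 1
    return a ^ b ^ c

# f(a,b,c) = [a∩b∩f(1,1,c)] ∪ [a∩b′∩f(1,0,c)] ∪ [a′∩b∩f(0,1,c)] ∪ [a′∩b′∩f(0,0,c)]
def expansion(a, b, c):
    return ((a & b & f(1, 1, c)) | (a & (1-b) & f(1, 0, c))
            | ((1-a) & b & f(0, 1, c)) | ((1-a) & (1-b) & f(0, 0, c)))

assert all(f(a, b, c) == expansion(a, b, c)
           for a, b, c in product([0, 1], repeat=3))
# For this f: f(1,1,c) = c, f(1,0,c) = c′, f(0,1,c) = c′, f(0,0,c) = c.
assert [f(1, 1, 0), f(1, 1, 1)] == [0, 1]
assert [f(1, 0, 0), f(1, 0, 1)] == [1, 0]
```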
Now 𝑓(1,1, 𝑐), 𝑓(1,0, 𝑐), 𝑓(0,1, 𝑐), 𝑓(0,0, 𝑐) all belong to the four-element lattice generated by
𝑐, which is composed of the elements 0, 𝑐, 𝑐′, 1. Therefore the required circuit is realised by
marrying the disjunctive tree for 𝑎 and 𝑏 (fig. 11a) with the network of fig. 11b.
There is of course no need to include the open circuit 0 in fig.11b except for diagrammatic
purposes. Whatever the nature of 𝑓(𝑎, 𝑏, 𝑐) at most eight switches or four transfer switches are
required. By way of illustration we take
𝑦 = (𝑎′ ∩ 𝑏′ ∩ 𝑐) ∪ (𝑎′ ∩ 𝑏 ∩ 𝑐′) ∪ (𝑎 ∩ 𝑏′ ∩ 𝑐′) ∪ (𝑎 ∩ 𝑏 ∩ 𝑐),
which is the formula for the digit 𝑦 of the binary adder investigated in the previous section. We
can easily verify that
𝑓(1,1, 𝑐) = 𝑐,  𝑓(1,0, 𝑐) = 𝑐′,  𝑓(0,1, 𝑐) = 𝑐′,  𝑓(0,0, 𝑐) = 𝑐.
To obtain a required circuit (fig.12) it is only necessary to connect the circuits 𝑎 ∩ 𝑏 and 𝑎′ ∩
𝑏′ with 𝑐 and to connect 𝑎 ∩ 𝑏′ and 𝑎′ ∩ 𝑏 with 𝑐 ′ . The short circuit 1 in fig.11b is not required.
The circuit of fig. 12 is clearly more economical than the series-parallel circuit of fig. 9 for the
same Boolean function. The method described may be applied to any number of variables, but
as the number rises the complexity of the computation rapidly increases.