Toc notes
UNIT I to V
PREPARED BY
S.RADHA
AP/CSE
LECTURE PLAN
TOPIC                 | REFERENCE              | NO. OF HOURS | TEACHING AIDS
Chomsky normal form   | T1-Ch3.8.1, R1-Ch7.1.5 | 1            | BB
Pumping lemma for CFL | T1-Ch4.7, R1-Ch7.2     | 2            | BB
TEXT BOOK:
1. J.E. Hopcroft, R. Motwani and J.D. Ullman, "Introduction to Automata Theory, Languages, and Computation", Second Edition, Pearson Education, 2003.
REFERENCES:
1. H.R. Lewis and C.H. Papadimitriou, "Elements of the Theory of Computation", Second Edition, PHI, 2003.
2. J. Martin, "Introduction to Languages and the Theory of Computation", Third Edition, TMH, 2003.
3. Michael Sipser, "Introduction to the Theory of Computation", Thomson Brooks/Cole, 1997.
UNIT-I
FINITE AUTOMATA
a. Lambda calculus
3. Logical models including
a. Logic programming
4. Concurrent models including
a. Actor model
1.1.3 Definition of Automata
Automata theory is an open-ended computer science discipline that concerns an abstract device called an automaton, which performs a specific computation and recognition function. An automaton is a formal system that represents a machine acting according to a given specification.
1.1.4 Alphabets, Strings, and Languages
1.1.4.1 Alphabets
An alphabet is a finite, non-empty set of symbols. It is denoted by ∑.
Example:
i. ∑ = {a, b}, an alphabet of 2 symbols a and b.
ii. ∑ = {0, 1, 2}, an alphabet of 3 symbols 0,1 and 2.
1.1.4.2 Strings
A string or word over an alphabet ∑ is a finite sequence of symbols
from ∑.
Example:
i If ∑ = {a, b}, then abab, aabba, aaabba are all strings over the alphabet
∑ = {a, b}.
ii If ∑ = {a}, then a, aa, aaa are all strings over the alphabet ∑ = {a}.
1.1.4.3 Languages
For any alphabet ∑, any subset L of ∑* is a language; a set of strings from an alphabet
is called a 'Language'.
Example:
i. English: It is a Language over ∑ = {a, b, c, …, z}.
ii. Binary String: {0,1,01,10,0101,…} is a language over ∑ = {0, 1}.
iii. If ∑ = {a, b} is an alphabet then ∑* = { є, a, b, aa, ab, … } is a language.
For Example
Prove that √2 is not a rational number.
The proof requires these facts about integers (which are the whole numbers, 0, 1, -1, 2, -
2, 3, -3, etc.).
1. For integers a, if a^2 is even, then a is even. The definition of even is that c is
even if and only if there is some integer k such that c = 2k.
2. For integers a, if a^2 is odd, then a is odd. The definition of odd is that c is not
even; necessarily, this means that there is some integer k such that c = 2k + 1.
3. If a number x is the ratio of two integers, then it is the ratio of two relatively
prime integers. That is, if x is a ratio of two integers, then there are some
integers a and b such that b is not 0, x=a/b, and the greatest common divisor
of a and b is 1 (1 and -1 both divide a and b, but no other integers do that).
Suppose that x is the square root of 2 and that x is also the ratio of two integers. Let a
and b be relatively prime integers such that b is not 0 and x = a/b. Then
2 = x^2 = (a^2)/(b^2).
So, 2(b^2) = a^2.
Therefore, a^2 is even (because it is twice another integer). By (1) above, a is even.
Thus a = 2k for some integer k, and hence 2(b^2) = (2k)^2.
Therefore, 2(b^2) = 4(k^2).
Divide by two, to discover that b^2 = 2(k^2).
Therefore b^2 is even (because it is twice another integer). By (1) above, b is even.
Therefore, b = 2j for some integer j.
Therefore, a and b are not relatively prime, because each has 2 as a divisor and 2 is
bigger than 1 (their supposed greatest common divisor). This is a contradiction.
The contradiction means that something in our "Suppose that x is the square root of 2
and x is also the ratio of two integers" is false. So, if one part of this "and" sentence is
true, the other part must be false. In particular, if x is indeed the square root of 2, then x
is NOT the ratio of two integers. In mathematical terminology, x is irrational. "Ir" as a
prefix means "not". So x is NOT rational where rational means being the ratio of two
integers (with the denominator integer non-zero)
Example: Prove the distributive law of union over intersection, R∪(S∩T) = (R∪S)∩(R∪T).
Proof:
The two set expressions involved are
E = R∪(S∩T) and F = (R∪S)∩(R∪T)
In the ‘if’ part we assume element x is in E and show it is in F. Steps in ‘if part’ are
as follows:
Step Statement Justification
No.
1 x is in R∪(S∩T) Given
2 x is in R or x is in S∩T (1) and definition of union
3 x is in R or x is in S and T (2) and definition of intersection
4 x is in R∪S (3) and definition of union
5 x is in R∪T (3) and definition of union
6 x is in (R∪S) ∩ (R∪T) (4), (5) and definition of intersection
Steps in ‘only if part’ are as follows:
Step Statement Justification
No.
1 x is in (R∪S) ∩ (R∪T) Given
2 x is in R∪S (1) and definition of intersection.
3 x is in R∪T (1) and definition of intersection.
4 x is in R or x is in S and T (2), (3) and reasoning about
unions.
5 x is in R or x is in S∩T (4) and definition of intersection.
6 x is in R∪(S∩T) (5) and definition of union.
Since we have now proved both parts of the if-and-only-if statement, the
distributive law of union over intersection is proved.
1.4 INDUCTIVE PROOFS:
There is a special form of proof, called inductive proof, that is essential when
dealing with recursively defined objects.
Many of the most familiar inductive proofs deal with integers, but in
automata theory we also need inductive proofs about recursively
defined concepts such as trees and expressions of various sorts, for example
regular expressions.
1.4.1 Mathematical Induction
Mathematical induction is a way of establishing the correctness of formulas
involving an integer variable. It also applies to inequalities, algorithms and other
assertions involving an integer variable. Even more broadly it applies to algorithms
and assertions involving string variables. Let's see how it works first in the case of
a formula.
You have a formula that involves an integer variable n and you want to prove it
is true for all positive integers n. To do this you do the following two things.
(1) First you show it is true for n = 1
(2) Then you show that the truth of the formula for a particular value of n
implies the truth of the formula for n + 1
These two things imply that the formula is true for all positive integers n. The
idea is that starting with the truth for n = 1, you can apply (2) repeatedly to reach
any positive integer n.
Steps:
Step 1:
The basis, where we show S(i) for a particular integer i. Usually i = 0 or i = 1.
Step 2:
The inductive step, where we assume S(n) for some n ≥ i (where i is the basis) and show that S(n + 1) follows.
For Example
Prove, by Mathematical Induction, that
(n+1)^2 + (n+2)^2 + (n+3)^2 + ... + (2n)^2 = n(2n+1)(7n+1)/6
is true for all natural numbers n.
Discussion
Some readers may find it difficult to write the L.H.S. in P(k + 1). Some cannot
factorize the L.H.S. and are forced to expand everything.
For P(1),
L.H.S. = 2^2 = 4, R.H.S. = (1)(3)(8)/6 = 4. P(1) is true.
Assume that P(k) is true for some natural number k, that is
(k+1)^2 + (k+2)^2 + (k+3)^2 + ... + (2k)^2 = k(2k+1)(7k+1)/6 …. (1)
For P(k + 1),
L.H.S. = (k+2)^2 + (k+3)^2 + ... + (2k)^2 + (2k+1)^2 + (2k+2)^2
(There is a missing term (k+1)^2 in front and two more terms at the back.)
= [(k+1)^2 + (k+2)^2 + ... + (2k)^2] - (k+1)^2 + (2k+1)^2 + (2k+2)^2
= k(2k+1)(7k+1)/6 + (2k+1)^2 + 3(k+1)^2, by (1)
= (2k+1)[k(7k+1) + 6(2k+1)]/6 + 3(k+1)^2 (combine the first two terms)
= (2k+1)(7k^2 + 13k + 6)/6 + 3(k+1)^2
= (2k+1)(7k+6)(k+1)/6 + 3(k+1)^2
= (k+1)[(2k+1)(7k+6) + 18(k+1)]/6
= (k+1)(14k^2 + 37k + 24)/6
= (k+1)(2k+3)(7k+8)/6
= (k+1)(2(k+1)+1)(7(k+1)+1)/6
P(k + 1) is true.
By the Principle of Mathematical Induction, P(n) is true for all natural numbers n.
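The identity can also be sanity-checked numerically for small n; a quick sketch (evidence only, the induction above is the proof):

```python
# Check (n+1)^2 + (n+2)^2 + ... + (2n)^2 = n(2n+1)(7n+1)/6 for small n.
def lhs(n):
    return sum(k * k for k in range(n + 1, 2 * n + 1))

def rhs(n):
    return n * (2 * n + 1) * (7 * n + 1) // 6

for n in range(1, 50):
    assert lhs(n) == rhs(n)
print(lhs(3), rhs(3))  # both equal 77
```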
to the next (as defined by the transition function) is called a move. A finite state
machine or finite automaton is the simplest type of abstract machine we consider.
Any system that is at any point of time in one of a finite number of internal states,
and moves among these states in a defined manner in response to some input,
can be modeled by a finite automaton.
1.5.3 Example of finite automaton
An Example Automaton: the 11011 Sequence Detector
We begin our formal discussion of finite automata with an example discussed in
the previous chapter. This example is the 11011 sequence detector, which we
now discuss using the terms and state diagrams of finite automata theory.
Here is the state diagram of the 11011 sequence detector drawn as a finite
automaton.
The reader will note that the figure is quite different from that previously drawn
for the sequence detector. We comment on a few of the more important features of
this FA.
1. The states are labeled as q1, q2, q3, q4, q5, and q6. Theoreticians who work with
finite automata prefer to label the states with the lower-case “q”, beginning at 1
and running to the number of states.
2. The output of this automaton is not associated with the transitions, but with the
states. In this, the automaton more resembles the counters we discussed
earlier, than it does the sequence detector. We shall later note that some finite
automata models do have output associated with the transitions, but they are
not so common.
3. There are six states associated with the automaton, not the five as developed
earlier. This is a result of the output being associated with the states.
4. The state q6 has a double circle around it. This marks it as an accept state,
also called a final state. Finite automata may have any number of accept
states, including zero (for no accept states).
5. The FA has a start state, q1, that is indicated by having an arrow pointing at it
from nowhere. The FA starts here.
6. The FA takes a string, one character at a time, as input and outputs either
“accept” or “reject”. If the FA ends in an accept state at the end of
processing the string, the FA accepts the string; otherwise it rejects it. The term
“detector” seems not to be used when discussing finite automata.
7. There is no concept of a real clock, such as we used in our counters and
sequence detectors. These FA are mathematical models.
8. This finite automaton is said to accept any string terminating in 11011, rather
than detecting the sequence 11011. We now call this an "11011 acceptor".
1.5.4 Variant Definitions of Automata
Automata are defined to study useful machines under mathematical formalism.
So, the definition of an automaton is open to variations according to the "real world
machine", which we want to model using the automaton. People have studied
many variations of automata. The most standard variant, which is described above,
is called a deterministic finite automaton. The following are some popular variations
in the definition of different components of automata.
Input
Finite input: An automaton that accepts only finite sequence of symbols. The
above introductory definition only encompasses finite words.
States
Finite states: An automaton that contains only a finite number of states. The
above introductory definition describes automata with finite numbers of states.
Stack memory: An automaton may also contain some extra memory in the form
of a stack in which symbols can be pushed and popped. This kind of automaton
is called a pushdown automaton
Transition function
Deterministic: For a given current state and an input symbol, if an automaton
can only jump to one and only one state then it is a deterministic automaton.
Nondeterministic: An automaton that, after reading an input symbol, may jump
into any of a number of states, as licensed by its transition relation. Notice that
the term transition function is replaced by transition relation: The automaton
non-deterministically decides to jump into one of the allowed choices. Such
automata are called nondeterministic automata.
Acceptance condition – for finite automaton
Acceptance of finite words
- Automaton goes through the whole examined word and ends in some final
state.
1.6 NON DETERMINISTIC FINITE AUTOMATA
An NFA has a finite set of states, a finite set of input symbols, one start state
and set of accepting states.
The difference between the DFA and the NFA is in the type of δ:
i) For the NFA, δ is a function that takes a state and an input symbol as
arguments, but returns a set of zero, one or more states.
δ(q,a) = {p,q}
Ex: δ(q0,a) = {q1,q2}
ii) For the DFA, δ is a function that returns exactly one state.
δ(q,a) = p
Ex: δ(q0,a) = q1
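The difference is easy to see in code; a minimal sketch with hypothetical transition tables matching the examples above:

```python
# NFA: δ returns a SET of states (hypothetical table, as in the example).
nfa_delta = {("q0", "a"): {"q1", "q2"}, ("q1", "a"): {"q1"}}

def nfa_step(states, symbol):
    """From a set of current states, collect every state reachable on `symbol`."""
    result = set()
    for q in states:
        result |= nfa_delta.get((q, symbol), set())
    return result

# DFA: δ returns exactly one state.
dfa_delta = {("q0", "a"): "q1"}

print(nfa_step({"q0"}, "a"))   # the set {'q1', 'q2'}
print(dfa_delta[("q0", "a")])  # the single state 'q1'
```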
Example
Let a non-deterministic finite automaton be →
Q = {a, b, c}
∑ = {0, 1}
q0 = {a}
F = {c}
The transition function δ as shown below −
Present State | Next State for Input 0 | Next State for Input 1
a             | a, b                   | b
b             | c                      | a, c
c             | b, c                   | c
DFA: Empty string (ε) transitions are not seen in DFA.
NDFA: NDFA permits empty string (ε) transitions.
1.6.2 Theorem
Let X = (Qx, ∑, δx, q0, Fx) be an NDFA which accepts the language L(X). We have
to design an equivalent DFA Y = (Qy, ∑, δy, q0, Fy) such that L(Y) = L(X). The
following procedure converts the NDFA to its equivalent DFA −
Algorithm
Input − An NDFA
Output − An equivalent DFA
Step 1 − Create state table from the given NDFA.
Step 2 − Create a blank state table under possible input alphabets for the
equivalent DFA.
Step 3 − Mark the start state of the DFA by q0 (Same as the NDFA).
Step 4 − Find out the combination of States {Q0, Q1,... , Qn} for each possible input
alphabet.
Step 5 − Each time we generate a new DFA state under the input alphabet
columns, we have to apply step 4 again, otherwise go to step 6.
Step 6 − The states which contain any of the final states of the NDFA are the final
states of the equivalent DFA.
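Steps 1–6 amount to the subset construction; a minimal Python sketch, using the NDFA transition table from the example below (table taken from the text):

```python
from collections import deque

# Transition table of the NDFA from the example.
nfa_delta = {
    ("a", "0"): {"a", "b"}, ("a", "1"): {"b"},
    ("b", "0"): {"c"},      ("b", "1"): {"a", "c"},
    ("c", "0"): {"b", "c"}, ("c", "1"): {"c"},
}

def nfa_to_dfa(nfa_delta, start, finals, alphabet):
    """Subset construction: each DFA state is a frozenset of NDFA states."""
    start_state = frozenset([start])
    dfa_delta, queue, seen = {}, deque([start_state]), {start_state}
    while queue:                      # step 5: repeat while new states appear
        S = queue.popleft()
        for sym in alphabet:          # step 4: combine the successor states
            T = frozenset(p for q in S for p in nfa_delta.get((q, sym), ()))
            dfa_delta[(S, sym)] = T
            if T not in seen:
                seen.add(T)
                queue.append(T)
    # step 6: a subset is final if it contains an NDFA final state
    dfa_finals = {S for S in seen if S & finals}
    return dfa_delta, dfa_finals

dfa_delta, dfa_finals = nfa_to_dfa(nfa_delta, "a", {"c"}, "01")
print(len({S for S, _ in dfa_delta}))  # 7 subset states are reachable
```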
Example
Let us consider the NDFA shown in the figure below.
q δ(q,0) δ(q,1)
a {a,b,c,d,e} {d,e}
b {c} {e}
c ∅ {b}
d {e} ∅
e ∅ ∅
Using the above algorithm, we find its equivalent DFA. The state table of the DFA is
shown in below.
q δ(q,0) δ(q,1)
[d,e] [e] ∅
[e] ∅ ∅
[c, e] ∅ [b]
[c] ∅ [b]
In this NFA with ε, q0 is the start state. On input 0, one can either stay in state
q0 or (via an ε-move) be in state q1.
If we get the symbol 1 at the start, then with an ε-move we can change state from
q0 to q1 and then, on that input, be in state q1.
For example, the string w = 002 is accepted by the NFA along the path q0, q0, q0,
q1, q2, q2 with arc labels 0, 0, ε, ε, 2.
1.7.2 Definition (or) Formal notation for an ε-NFA
The language accepted by an NFA with ε, denoted by M = (Q, ∑, δ, q0, F), can be
defined as follows:
Let
M = (Q, ∑, δ, q0, F) be an NFA with ε where
Q is a finite set of states
∑ is the set of input symbols
δ is a transition function from Q × (∑ ∪ {ε}) to 2^Q
q0 is the start state
F is a set of final states such that F ⊆ Q.
The set of strings accepted by the NFA can be represented as
L(M) = {w | w ∈ ∑* and the δ transition for w from q0 reaches a state in F}
Example:
Construct NFA with ε which accepts a language consisting the strings of any
number of a’s followed by any number of b’s followed by any number of c’s.
Solution:
[Figure: ε-NFA with states q0 (start), q1, q2 (final); q0 loops on a, q1 loops on b, q2 loops on c, with ε-arcs q0 → q1 and q1 → q2.]
Transition Table:
State | a  | b  | c  | ε
q0    | q0 | φ  | φ  | q1
q1    | φ  | q1 | φ  | q2
*q2   | φ  | φ  | q2 | φ
The ε-CLOSURE(q0) = {q0, q1, q2}, i.e., the path consisting of q0 alone is a
path from q0 to q0 with all arcs labelled ε.
The path from q0 to q1 labelled ε shows that q1 is in ε-CLOSURE(q0), and the
path q0, q1, q2 shows q2 is in ε-CLOSURE(q0).
ε-closure definition:
1. Let P be a set of states.
2. Then ε-closure(P) = ∪_{q∈P} ε-closure(q)
3. The δ' transition function is defined as
a. δ'(q,ε) = ε-CLOSURE(q)
b. for w in ∑* and a in ∑,
δ'(q,wa) = ε-CLOSURE(P), where P = {p | for some r in δ'(q,w), p is in δ(r,a)}
c. δ(R,a) = ∪_{q in R} δ(q,a)
d. δ'(R,w) = ∪_{q in R} δ'(q,w)
4. The language accepted by an NFA with ε-moves is defined as: L(M) = {w |
δ'(q0,w) contains a state in F}.
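A sketch of computing ε-CLOSURE by graph search, using the ε-arcs q0 → q1 → q2 of the a/b/c automaton above:

```python
def eps_closure(states, eps_moves):
    """ε-CLOSURE(P): every state reachable from P using only ε-arcs."""
    stack, closure = list(states), set(states)
    while stack:
        q = stack.pop()
        for r in eps_moves.get(q, ()):
            if r not in closure:
                closure.add(r)
                stack.append(r)
    return closure

eps = {"q0": {"q1"}, "q1": {"q2"}}   # ε-arcs q0 -> q1 -> q2 from the example
print(eps_closure({"q0"}, eps))      # the set {'q0', 'q1', 'q2'}
```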
For Example
[Figure: NFA with ε-moves over states 1–7, with ε-arcs 1 → 2, 2 → 3, 3 → 6, 1 → 4 and 5 → 7, and arcs labelled a and b among the remaining states.]
ε-closure(1) = {1,2,4,3,6}={1,2,3,4,6}
ε-closure(2) = {2,3,6}
ε-closure(5) = {5,7}
Example:
1. Consider the NFA given below and find δ(q0,01).
[Figure: ε-NFA with states q0 (start), q1, q2 (final); q0 loops on 0, q1 loops on 1, q2 loops on 2, with ε-arcs q0 → q1 and q1 → q2.]
Solution:
δ'(q0, ε) = ε-closure(q0) = {q0, q1, q2} ------ (1)
Then
δ'(q0,0) = ε-closure(δ(δ'(q0, ε),0))
= ε-closure(δ({q0,q1,q2},0))
= ε-closure(δ(q0,0) ∪ δ(q1,0) ∪ δ(q2,0))
= ε-closure({q0} ∪ φ ∪ φ)
= ε-closure({q0})
= {q0, q1, q2}, from equation (1)
δ'(q0,01) = ε-closure(δ(δ'(q0, 0),1))
= ε-closure(δ({q0,q1,q2},1))
= ε-closure(δ(q0,1) ∪ δ(q1,1) ∪ δ(q2,1))
= ε-closure(φ ∪ {q1} ∪ φ)
= ε-closure({q1})
= {q1, q2}
Since δ'(q0,01) = {q1, q2}, which contains the final state q2, the string 01 is accepted.
Example 1:
Convert the following NFA with ε to without ε.
[Figure: ε-NFA with states q0 (start), q1, q2 (final); q0 loops on 0, q1 loops on 1, q2 loops on 2, with ε-arcs q0 → q1 and q1 → q2.]
δ'(q0,0) = ε-closure(δ(ε-closure(q0),0))
QUESTION BANK
Part- A
1. Give two strings that are accepted and two strings that are rejected by the following
finite automaton M = ({q0,q1,q2}, {0,1}, δ, q0, {q1}). [May/June 2016]
2. List any four ways of theorem proving. (Nov’2007)
3. What are deductive proofs?[Nov/Dec 2014]
4. Define proof by contra positive.
5. Define Induction principle. (Apr’2008)
6. Define Alphabets.
7. Define string.
8. What is the need for finite automata?
9. What are finite automata? Give two examples. (May’2009.Apr’2008,May 2007)
10. Define DFA.[May/Jun 2013,Nov/Dec 2016]
11. Explain how a DFA processes strings.
12. Define transition diagram.[May/Jun 2015]
13. Define transition table.
14. Define the language of NFA. (Nov’2008)
15. Define є- transitions.[May/Jun 2013]
16. Define NFA.[Nov/Dec 2013,Apr/May 18]
17. Define є -NFA.
18. Define є- closure.
Part-B & C
1. Construct DFA equivalent to the NFA given below: (10) [May2006,
Nov/Dec 2013,Nov 2016]
UNIT-II
Kleene Closure of L: L* = ∪_{i=0}^∞ L^i
L2 = {00, 011, 110, 1111}
L3 = {000, 0011, 0110, 01111, 1100, 11011, 11110, 111111}
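These powers of a language can be reproduced with set concatenation; a sketch assuming the base language is L = {0, 11} (the base set itself does not appear in the source):

```python
def concat(A, B):
    """Concatenation of two languages: every x in A followed by every y in B."""
    return {x + y for x in A for y in B}

L = {"0", "11"}          # assumed base language
L2 = concat(L, L)
L3 = concat(L2, L)
print(sorted(L2))        # ['00', '011', '110', '1111']
```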
Positive Closure of L (one or more occurrences): L+ = ∪_{i=1}^∞ L^i
Example 1:
Write a regular expression for the language accepting all combination of a’s over
the set ∑ = {a}.
Solution:
All combinations of a’s means a may be single, double, triple and so on.
There may be the case that ‘a’ is appearing for zero times, which means a null
string. So L={ε,a,aa,aaa,aaaa,...}.
R=a*
Example 2:
Design the regular expression for the language accepting all combinations of a’s
except the null string over ∑={a}.
Solution:
R=a+
Example 3:
Design the regular expression for the language containing all the strings containing
any number of a’s and b’s.
Solution:
R = (a+b)*
Example 4:
Construct the regular expression for the language accepting all the strings which
are ending with 00 over the set ∑={0,1}.
Solution:
R = (0+1)*00
Example 5:
Write the regular expression for the language accepting the string which are starting
with 1 and ending with 0 over the set ∑={0,1}.
Solution:
R = 1(0+1)*0
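Examples 1–5 can be sanity-checked with Python's re module (its union operator is | rather than the + used in the text):

```python
import re

# fullmatch tests whether the WHOLE string belongs to the language.
assert re.fullmatch(r"a*", "")             # Example 1: ε is in L(a*)
assert re.fullmatch(r"a+", "aaa")          # Example 2: one or more a's
assert re.fullmatch(r"(a|b)*", "abba")     # Example 3
assert re.fullmatch(r"(0|1)*00", "10100")  # Example 4: ends with 00
assert re.fullmatch(r"1(0|1)*0", "1010")   # Example 5: starts 1, ends 0
assert not re.fullmatch(r"(0|1)*00", "101")
print("all regular-expression checks passed")
```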
Example 6:
Solution:
I1 : ϕ + R = R
I2 : ϕR = Rϕ = ϕ
I3 : λR = Rλ = R
I4 : λ* = λ
I5 : R+R=R
I6 : R*R* = R*
I7 : RR* = R*R
I8 : (R*)* = R*
An NFA with ε-moves can be converted to an NFA without ε-moves, which in turn can be converted to a Deterministic Finite Automaton.
Equivalence of NFA and Regular Expressions:
Theorem:
Let r be a regular expression, then there exists a NFA with ε transition that
accepts L(r).
Proof:
1. Basis : (Zero Operators)
r has zero operators means r can be either ε or ϕ or 'a' for some 'a' in the
input set ∑.
If r = ε, then the NFA is a single state q0 that is both the start state and the final state, with no transitions.
If r = ϕ, then the NFA has a start state q0 and a final state q1 with no transition between them.
If r = a, then the NFA has a start state q0 and a final state q1 with a single arc labelled a from q0 to q1.
2. Induction
Assume the theorem holds for regular expressions with fewer operators. A regular
expression with one or more operators falls into one of only three cases:
Union
Concatenation
Closure (star or positive closure)
M2 = (Q2, ∑2, δ2, q2, {f2})
The δ of the combined machine is given by
δ(q0,ε) = {q1, q2}
[Figure: NFA for r1 + r2. A new start state q0 has ε-arcs to q1 (start of M1) and to q2 (start of M2); the final states f1 of M1 and f2 of M2 have ε-arcs to a new final state f0.]
[Figure: NFA for r1 . r2. Machine M1 (from q1 to f1) is followed by an ε-arc from f1 to q2, the start of machine M2 (from q2 to f2).]
From the initial state q1, on some input a, the next state will be f1.
On receiving ε, the transition is from f1 to q2, and the final state will be f2.
The transition from q2 to f2 will be on receiving some input b.
Thus L(M) = ab
i.e) a is in L(M1), b is in L(M2).
Hence we prove L(M) = L(M1) . L(M2).
Case 3: Closure Case
Let r= r1*, where r1 be a regular expression.
The machine M1 is such that L(M1) = L(r1).
Then construct M = (Q1∪{q0,f0},∑1,δ,q0,f0).
The mapping function δ is given by
i. δ(q0,ε) = δ(f1,ε) = {q1,f0}
ii. δ(q,a) = δ1(q,a) for q in Q1-{f1} and a is in ∑∪{ε}.
[Figure: NFA for r1*. A new start state q0 has ε-arcs to q1 (the start of M1) and to a new final state f0; the final state f1 of M1 has ε-arcs back to q1 and to f0.]
Examples 3–7:
[Figures: sample regular expressions and their step-by-step NFA constructions with ε-moves, built using the union, concatenation and closure cases above.]
Steps:
Step 1:
Find the ε-closure of the state q0 from the constructed ε-NFA that is, from state
q0, ε-transition to other states is identified as well as ε-transitions from other states
are identified and combined as one set (new state).
Step 2:
Perform the following steps until no more new states are constructed.
i. Find the transition of the given regular expression symbols over ∑ from
the new state, that is move(new state, symbol)
ii. Find the ε-closure of move(new state, symbol)
Example:
Construct the DFA for the regular expression (a/b)*ab.
Solution:
[Figure: ε-NFA for (a/b)*ab with states 1–9: state 1 is the start; ε-arcs lead from 1 to 2, from 2 to the a-branch (3 -a-> 4) and the b-branch (5 -b-> 6), and from 4 and 6 back toward 2 and on to 7; then 7 -a-> 8 -b-> 9, with 9 the final state.]
Move (B,b)
Move(C,b) = {6}
Equivalent DFA is
[Figure: equivalent DFA with states A (start), B, C (accepting) and D; A -a-> B and B -b-> C, with the remaining a- and b-transitions among A, B, C and D.]
The formula for obtaining the regular expression rij(k) is:
rij(k) = rik(k-1) (rkk(k-1))* rkj(k-1) + rij(k-1)
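The recurrence can be sketched directly, building regular-expression strings for a small automaton; in this sketch (helper names are illustrative) 'e' stands for ε and None for ϕ:

```python
def kleene_regex(n, edge):
    """r[(i, j)] per the recurrence; edge[(i, j)] is the arc label i -> j.
    States are numbered 1..n; 'e' stands for epsilon, None for the empty set."""
    def cat(a, b):
        if a is None or b is None:
            return None
        if a == "e":
            return b
        if b == "e":
            return a
        return a + b
    def alt(a, b):
        if a is None:
            return b
        if b is None:
            return a
        return a if a == b else a + "+" + b
    def star(a):
        if a is None or a == "e":
            return "e"
        return "(" + a + ")*"
    # basis k = 0: the arc label, plus e on the diagonal
    r = {(i, j): alt(edge.get((i, j)), "e" if i == j else None)
         for i in range(1, n + 1) for j in range(1, n + 1)}
    for k in range(1, n + 1):
        r = {(i, j): alt(r[(i, j)], cat(cat(r[(i, k)], star(r[(k, k)])), r[(k, j)]))
             for i in range(1, n + 1) for j in range(1, n + 1)}
    return r

# Two-state automaton with a single arc q1 --0--> q2:
r = kleene_regex(2, {(1, 2): "0"})
print(r[(1, 2)])  # '0'
```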
[Figure: automaton with two states q1 (start) and q2 (final), and a single arc labelled 0 from q1 to q2.]
Solution:
k = 0
r11(0) = ε
r12(0) = 0
r21(0) = ϕ
r22(0) = ε
k = 1
r11(1) = r11(0)(r11(0))* r11(0) + r11(0) = ε
r12(1) = r11(0)(r11(0))* r12(0) + r12(0) = 0
r21(1) = r21(0)(r11(0))* r11(0) + r21(0) = ϕ
r22(1) = r21(0)(r11(0))* r12(0) + r22(0) = ε
Now let us compute for the final state, which gives the regular expression.
r12(2) will be computed, because there are 2 states in total, the final state is q2,
and the start state is q1.
r12(2) = r12(1)(r22(1))* r22(1) + r12(1)
= 0 . (ε)* . ε + 0
= 0 + 0
= 0
Example 2:
Solution:
We will find Rij(0), where k = 0.
R11(0) = ε, because from state q1 to q1 we can move only by an ε-transition.
R12(0) = 0, because from q1 we can reach q2 by input '0'.
R11(0) = ε
R12(0) = 0
R13(0) = 1
R21(0) = 0
R22(0) = ε
R23(0) = 1
R31(0) = ϕ
R32(0) = 0+1
R33(0) = ε
• (ε + R)* = R*
• R + RS* = RS*
• ϕR = Rϕ = ϕ
• ϕ + R = R + ϕ = R (identity)
Rij(k) = Rij(k-1) + Rik(k-1)(Rkk(k-1))* Rkj(k-1)
R11(1) = ε + ε(ε)*ε
= ε
R12(1) = 0 + ε(ε)*0
= 0 + 0 = 0
R13(1) = 1 + ε(ε)*1
= 1 + 1 = 1
R21(1) = 0 + 0(ε)*ε
= 0 + 0 = 0
R22(1) = ε + 0(ε)*0
= ε + 00
R23(1) = R23(0) + R21(0)(R11(0))* R13(0)
= 1 + 0(ε)*1
= 1 + 01
R31(1) = ϕ + ϕ(ε)*ε
= ϕ + ϕ
= ϕ
R32(1) = (0+1) + ϕ(ε)*0
= 0 + 1 + ϕ
= 0 + 1
R33(1) = ε + ϕ(ε)*1
= ε
Therefore
R21(2) = 0 + (ε+00)(ε+00)*0
= 0 + 00(00)*0
= 0(00)*
R22(2) = (ε+00) + (ε+00)(ε+00)*(ε+00)
= (00)*
R23(2) = (1+01) + (ε+00)(ε+00)*(1+01)
= (ε+0)1 + (00)*(1+01)
= (ε+0)1 + (00)*(ε+0)1
= 0*1
R31(2) = ϕ + (0+1)(ε+00)*0
= ϕ + (0+1)(00)*0
R32(2) = (0+1) + (0+1)(ε+00)*(ε+00)
= (0+1) + (0+1)(00)*
R33(2) = ε + (0+1)(ε+00)*(1+01)
= ε + (0+1)(00)*(ε+0)1 (because (ε+00)* = (00)* and (1+01) = (ε+0)1)
5. Similarly compute the final state which ultimately gives the regular expression.
Example 1:
Construct regular expression from the given DFA.
[Figure: DFA with states q1 (start), q2, q3; q1 loops on 0; q1 -1-> q2; q2 loops on 1; q2 -0-> q3; q3 -0-> q1; q3 -1-> q2.]
Solution:
q1 = q10 + q30 + ɛ ------------------ ①
q2 = q11 + q21 + q31 ---------------- ②
q3 = q20 ------------------------------ ③
Substitute ③ into ②:
q2 = q11 + q21 + q201 = q2(1+01) + q11
By Arden's theorem, R = Q + RP implies R = QP*; here R = q2, Q = q11 and P = (1+01), so
q2 = q11(1+01)* ---------------------- ④
Substitute ④ and ③ into ①:
q1 = q10 + q200 + ɛ
Example 2:
[Figure: DFA with states q1 (start), q2, q3; q1 loops on a; q1 -b-> q2; q2 loops on b; q2 -a-> q3.]
Solution:
q1 = q1a + ɛ ---------------------------------------- > ①
q2 = q1b + q2b ------------------------------------- > ②
we can rewrite ① as
q1 = ɛ + q1a
By Arden's theorem (R = Q + RP ⇒ R = QP*) with R = q1, Q = ɛ and P = a:
q1 = ɛa*
= a* (∵ ɛ.R = R)
q2 = q1b + q2b
q2 = a*b + q2b
q2 = a*bb* = a*b+ (∵ b.b* = b+)
Example
Let us consider the following DFA −
q  | δ(q,0) | δ(q,1)
a  | b      | c
b  | a      | d
*c | e      | f
*d | e      | f
*e | e      | f
f  | f      | f
Let us apply the above algorithm to the above DFA −
P0 = {(c,d,e), (a,b,f)}
P1 = {(c,d,e), (a,b),(f)}
P2 = {(c,d,e), (a,b),(f)}
Hence, P1 = P2.
There are three states in the reduced DFA. The reduced DFA is as follows −
Q        | δ(q,0)  | δ(q,1)
(a,b)    | (a,b)   | (c,d,e)
*(c,d,e) | (c,d,e) | (f)
(f)      | (f)     | (f)
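The refinement from P0 to the stable partition can be sketched in Python; the transition table is taken from the example, and c, d and e are assumed to be the accepting states (consistent with P0 above):

```python
# Transition table from the example; finals assumed to be {c, d, e}.
states, alphabet = "abcdef", "01"
delta = {("a","0"):"b", ("a","1"):"c", ("b","0"):"a", ("b","1"):"d",
         ("c","0"):"e", ("c","1"):"f", ("d","0"):"e", ("d","1"):"f",
         ("e","0"):"e", ("e","1"):"f", ("f","0"):"f", ("f","1"):"f"}
finals = {"c", "d", "e"}

def minimize(states, alphabet, delta, finals):
    """Refine {finals, non-finals} until no block splits further."""
    partition = [set(finals), set(states) - set(finals)]
    changed = True
    while changed:
        changed = False
        new_partition = []
        for block in partition:
            groups = {}
            for q in block:
                # which block does each symbol lead to, under the OLD partition?
                key = tuple(next(i for i, b in enumerate(partition)
                                 if delta[(q, a)] in b) for a in alphabet)
                groups.setdefault(key, set()).add(q)
            new_partition.extend(groups.values())
            if len(groups) > 1:
                changed = True
        partition = new_partition
    return partition

print(minimize(states, alphabet, delta, finals))  # three blocks, as in the text
```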
QUESTION BANK
Part- A
1. State whether regular languages are closed under intersection and
complementation.[May/June 16,Nov/Dec 17]
2. State the pumping lemma for regular languages.[May/June 2016]
3. What are the two ways of representing automata?
4. What are the applications of finite automata?
5. What are the closure properties of regular languages?
6. Show whether the language L = {0^n 1^(2n) | n > 0} is regular or not using the pumping lemma. [May 2017]
7. Draw the transition diagram for an identifier
8. Construct an NFA for the following regular expression: (a+b)* a+b
9. Explain a finite automaton for the regular expression 0*1*.
10. Illustrate a regular expression for the set of all the strings
11. Compose the difference between the + closure and * closure
12. Illustrate a regular expression for the set of all strings of 0’s
13. Find out the language generated by the regular expression(0+1)*.
14. Tabulate the regular expression for the following
L1=set of strings 0 and 1 ending in 00
PART – B & C
UNIT III
3.1 CFG
Type 0
Type 1
Type 2
Type 3
Type 0 Grammar:
A phrase structure grammar with no restriction on the productions is called a
type 0 grammar.
A language that is generated by a type 0 grammar is called a "type 0" language.
The productions in a type 0 grammar are of the form α→β, in which there is no
restriction on the lengths of α and β.
Here α and β can contain any number of terminals and non-terminals.
For example,
Aac → bBDE
aBD → abcDE
Type 1 Grammar:
This grammar carries the restriction that in every production α→β, the
length of β is greater than or equal to the length of α.
That is, α→β with |α| ≤ |β|.
A language generated by a type 1 grammar is called a type 1 language, and
such a grammar is called a "context-sensitive grammar".
The productions in a type 1 grammar are of the form α→β, where α and β are
strings of terminals and non-terminals with |α| ≤ |β|.
Example:
S → aAB B → b
A → aBcb AB → bB
Type 2 Grammar:
A Context-Free Grammar (CFG) is one whose production rules are of the form A → γ,
where A is any single non-terminal and γ is any combination of terminals and non-terminals.
An NFA/DFA cannot recognize strings from this type of language, since we must
be able to "remember" information somehow. Instead we use a Push-Down Automaton,
which is like a DFA except that we are also allowed to use a stack.
Type 3 Grammar:
A regular language is one which can be represented by a regular grammar, described
using a regular expression, or accepted using an NFA or a DFA.
3.1.3 Formal Definition
A context-free grammar (CFG) consisting of a finite set of grammar rules is a
quadruple (N, T, P, S) where
N is a set of non-terminal symbols.
T is a set of terminals, where N ∩ T = ∅.
P is a set of production rules.
S is the start symbol, S ∈ N.
The derivation or the yield of a parse tree is the final string obtained by concatenating the
labels of the leaves of the tree from left to right, ignoring the Nulls. However, if all the
leaves are Null, derivation is Null.
Example
Let a CFG {N,T,P,S} be
N = {S}, T = {a, b}, Starting symbol = S, P = S → SS | aSb | ε
One derivation from the above CFG is “abaabb”
S → SS → aSbS → abS → abaSb → abaaSbb → abaabb
If a partial derivation tree contains the root S, it is called a sentential form. The above
sub-tree is also in sentential form.
Example 2
Construct the parse tree for the string 00101 for the following CFG using both leftmost
and rightmost derivation. The CFG is
S → A1B
A → 0A | ε
B → 0B | 1B | ε
Solution:
Leftmost derivation Rightmost derivation
Since two different parse trees exist for string w, therefore the given grammar is
ambiguous.
2. Check whether the given grammar is ambiguous or not-
S → A / B
A → aAb / ab
B → abB / ε
Solution Let us consider a string w generated by the given grammar-
w = ab
Now, let us draw parse trees for this string w.
Since two different parse trees exist for string w, therefore the given grammar is
ambiguous.
3. Check whether the given grammar is ambiguous or not-
S → AB / C
A → aAb / ab
B → cBd / cd
C → aCd / aDd
D → bDc / bc
Solution Let us consider a string w generated by the given grammar- w = aabbccdd
Now, let us draw parse trees for this string w.
Since two different parse trees exist for string w, therefore the given grammar is
ambiguous.
3.4 PUSHDOWN AUTOMATA
Pushdown automata differ from finite state machines in two ways:
1. They can use the top of the stack to decide which transition to take.
2. They can manipulate the stack as part of performing a transition.
A pushdown automaton is a way to implement a context-free grammar in a similar way we
design DFA for a regular grammar. A DFA can remember a finite amount of information,
but a PDA can remember an infinite amount of information.
Basically a pushdown automaton is −
"Finite state machine" + "a stack"
A pushdown automaton has three components −
an input tape,
a control unit, and
a stack with infinite size.
The stack head scans the top symbol of the stack.
Finite state machines just look at the input signal and the current state: they have no
stack to work with. Pushdown automata add the stack as a parameter for choice.
Pushdown automata can also manipulate the stack, as part of performing a transition.
Finite state machines choose a new state, the result of following the transition. The
manipulation can be to push a particular symbol to the top of the stack, or to pop off the
top of the stack. The automaton can alternatively ignore the stack, and leave it as it is.
The choice of manipulation (or no manipulation) is determined by the transition table. Put
together: Given an input signal, current state, and stack symbol, the automaton can follow
a transition to another state, and optionally manipulate (push or pop) the stack.
If we allow a finite automaton access to two stacks instead of just one, we obtain a more
powerful device, equivalent in power to a Turing machine. A linear bounded automaton is
a device which is more powerful than a pushdown automaton but less so than a Turing
machine.
Pushdown automata are equivalent to context-free grammars: for every context-free
grammar, there exists a pushdown automaton such that the language generated by the
grammar is identical with the language generated by the automaton, which is easy to
prove. The reverse is true, though harder to prove: for every pushdown automaton there
exists a context-free grammar such that the language generated by the automaton is
identical with the language generated by the grammar.
3.4.1 Formal Definition
A PDA can be formally described as a 7-tuple (Q, ∑, S, δ, q0, I, F) where
Q is the finite set of states
∑ is the input alphabet
S is the set of stack symbols
δ is the transition function: Q × (∑ ∪ {ε}) × S → Q × S*
q0 is the initial state (q0 ∈ Q)
I is the initial stack top symbol (I ∈ S)
F is the set of accepting states (F ⊆ Q)
The following diagram shows a transition in a PDA from a state q1 to state q2, labeled as
a,b → c
This means at state q1, if we encounter an input string ‘a’ and top symbol of the stack
is ‘b’, then we pop ‘b’, push ‘c’ on top of the stack and move to state q2.
3.4.2 Terminologies Related to PDA
3.4.2.1 Instantaneous Description
The instantaneous description (ID) of a PDA is represented by a triplet (q, w, s) where
q is the state
w is unconsumed input
s is the stack contents
Turnstile Notation
The "turnstile" notation is used for connecting pairs of ID's that represent one or many
moves of a PDA. The process of transition is denoted by the turnstile symbol "⊢".
Consider a PDA (Q, ∑, S, δ, q0, I, F). A transition can be mathematically represented by
the following turnstile notation −
(p, aw, Tβ) ⊢ (q, w, αβ)
This implies that while taking a transition from state p to state q, the input symbol 'a' is
consumed, and the top of the stack 'T' is replaced by a new string 'α'.
Note − If we want zero or more moves of a PDA, we have to use the symbol (⊢*) for it.
3.4.3 Problems on transition diagram of PDA
1. Suppose the PDA P = ({p, q}, {0,1}, {z0, x}, δ, q, z0, {p}) has the following transition
functions:
a. δ(q, 0, z0) = {(q, xz0)}
b. δ(q, 0, x) = {(q, xx)}
c. δ(q, 1, x) = {(q, x)}
d. δ(q, ε, x) = {(p, ε)}
e. δ(p, ε, x) = {(p, ε)}
f. δ(p, 1, x) = {(p, xx)}
g. δ(p, 1, z0) = {(p, ε)}
Starting from the initial ID (q, w, z0), show all the reachable instantaneous description
where the input string ‘w’ is as follows:
(i). 01
(ii). 0011
(iii). 010
Solution:
(i) String w = 01, starting ID = (q, w, z0)
(q, 01, z0) ├ (q, 1, xz0)
├ (q, ε, xz0)
├ (p, ε, z0)
Now the final state p is reached. So the string w = 01 is accepted.
(ii) w = 0011
ID = (q, w, z0)
(q, 0011, z0) ├ (q, 011, xz0)
├ (q, 11, xxz0)
├ (q, 1, xxz0)
├ (q, ε, xxz0)
├ (p, ε, xz0)
├ (p, ε, z0)
So the string 0011 is accepted.
(iii) w = 010
(q, 010, z0) ├ (q, 10, xz0)
├ (q, 0, xz0)
├ (q, ε, xxz0)
├ (p, ε, xz0)
├ (p, ε, z0)
So the string 010 is accepted.
2. Consider the following PDA and find whether the string 000111 and 00011 is accepted.
[Figure: PDA with states p (start), q, r (final). At p: 0, z0/0z0 and 0, 0/00 (push); 1, 0/ε moves from p to q; at q: 1, 0/ε (pop); ε, z0/ε moves from both p and q to r.]
Solution:
Transition function:
δ(p, 0, z0) = (p, 0z0)
δ(p, 0, 0) = (p, 00)
δ(p, 1, 0) = (q, ε)
δ(p, ε, z0) = (r, ε)
δ(q, 1, 0) = (q, ε)
δ(q, ε, z0) = (r, ε)
String w = 000111
(p, 000111, z0) ├ (p, 00111, 0z0)
├ (p, 0111, 00z0)
├ (p, 111, 000z0)
├ (q, 11, 00z0)
├ (q, 1, 0z0)
├ (q, ε, z0)
├ (r, ε, ε)
So the string 000111 is accepted.
String w = 00011
(p, 00011, z0) ├ (p, 0011, 0z0)
├ (p, 011, 00z0)
├ (p, 11, 000z0)
├ (q, 1, 00z0)
├ (q, ε, 0z0)
No further move is possible: the input is exhausted but a 0 remains on the stack and
the state is not final, so the string 00011 is not accepted.
Example
Construct a PDA that accepts L = {0^n 1^n | n ≥ 0}
Solution
Then at state q3, if we encounter input 1 and the top of the stack is 0, we pop that 0.
This step iterates as long as 1’s remain on the input and 0’s remain on the stack.
If the special symbol ‘$’ is encountered at top of the stack, it is popped out and it finally
goes to the accepting state q4.
Example
Construct a PDA that accepts L = { ww^R | w ∈ (a+b)* }
Solution
Initially we put a special symbol ‘$’ into the empty stack. At state q2, the PDA reads w,
pushing each input symbol onto the stack, and at a nondeterministically guessed midpoint
it moves to state q3. In state q3, each a or b is popped when it matches the current input
symbol. If any other input is given, the PDA goes to a dead state. When we reach the
special symbol ‘$’ on top of the stack, we go to the accepting state q4.
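The midpoint guess is the only nondeterminism this PDA needs. The sketch below simulates that behaviour directly, with a "push" phase that may switch to a "pop" phase at any point; the names and representation are our own.

```python
def accepts_wwr(s):
    """Accept s iff s = w wR for some w.
    Configurations are (input position, stack, phase)."""
    frontier, seen = [(0, "", "push")], set()
    while frontier:
        cfg = frontier.pop()
        if cfg in seen:
            continue
        seen.add(cfg)
        i, stack, phase = cfg
        if phase == "push":
            frontier.append((i, stack, "pop"))       # guess: midpoint is here
            if i < len(s):
                frontier.append((i + 1, stack + s[i], "push"))
        else:
            if i == len(s) and stack == "":
                return True                          # '$' reappears: accept
            if i < len(s) and stack and stack[-1] == s[i]:
                frontier.append((i + 1, stack[:-1], "pop"))
    return False
```

A string of odd length can never split as w wR, and the simulation rejects it automatically, since pushes and pops can never balance.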
3.6 EQUIVALENCE OF PDA AND CFG
If a grammar G is context-free, we can build an equivalent nondeterministic PDA which
accepts the language that is produced by the context-free grammar G. A parser can be
built for the grammar G.
Also, if P is a pushdown automaton, an equivalent context-free grammar G can be
constructed where
L(G) = L(P)
3.6.1 Algorithm to find PDA corresponding to a given CFG
Input − A CFG, G = (V, T, P, S)
Output − Equivalent PDA, P = (Q, ∑, S, δ, q0, I, F)
Step 1 − Convert the productions of the CFG into GNF.
Step 2 − The PDA will have only one state {q}.
Step 3 − The start symbol of CFG will be the start symbol in the PDA.
Step 4 − All non-terminals of the CFG will be the stack symbols of the PDA and all
the terminals of the CFG will be the input symbols of the PDA.
Step 5 − For each production of the form A → aX, where a is a terminal and X is a
(possibly empty) string of terminals and non-terminals, add the transition δ(q, a, A) = (q, X).
Problem 1
Construct a PDA from the following CFG.
G = ({S, X}, {a, b}, P, S)
where the productions are −
S → XS | ε , X → aXb | Xb | ab
Solution
Let the equivalent PDA,
P = ({q}, {a, b}, {a, b, X, S}, δ, q, S)
where δ −
δ(q, ε , S) = {(q, XS), (q, ε )}
δ(q, ε , X) = {(q, aXb), (q, Xb), (q, ab)}
δ(q, a, a) = {(q, ε )}
δ(q, b, b) = {(q, ε )}
Problem 2
Given G = (V,Σ,R,S), what the PDA will do is effect a leftmost derivation of a string w ∈
L(G). The PDA is
M = ( {0,1}, Σ, V, Δ, 0, {1} )
Namely, there are only two states. The PDA moves immediately to the final state 1,
placing the start symbol on the stack, and then stays in this state.
These are the transitions in Δ:
1: ( (0,ε,ε), (1,S) )
2: ( (1,ε,A), (1,α) ) for A → α
3: ( (1,σ,σ), (1,ε) ) for σ ∈ Σ
This constructed PDA is inherently non-deterministic; if there is a choice of rules to
apply to a non-terminal, then there is a non-deterministic choice of processing steps.
Graphically the presentation is this:
To say that G and M are equivalent means that L(M) = L(G), or, considering an arbitrary
string w ∈ Σ*:
S ⇒* w ⇔ (0,w,ε) ⊢* (1,ε,ε)
Expression Grammar
The most informative examples are those that admit derivations other than the leftmost
one, such as our expression grammar:
E→E+T|T
T→T*F|F
F→(E)|a
We can readily match up a leftmost derivation of a + a * a with the corresponding
machine configuration processing as follows:
(0, a + a * a, ε)
⊢ (1, a + a * a, E)
⊢ (1, a + a * a, E + T) E⇒E+T
⊢ (1, a + a * a, T + T) ⇒T+T
⊢ (1, a + a * a, F + T) ⇒F+T
⊢ (1, a + a * a, a + T) ⇒a+T
⊢ (1, + a * a, + T)
⊢ (1, a * a, T)
⊢ (1, a * a, T * F) ⇒a+T*F
⊢ (1, a * a, F * F) ⇒a+F*F
⊢ (1, a * a, a * F) ⇒a+a*F
⊢ (1, * a, * F)
⊢ (1, a, F)
⊢ (1, a, a) ⇒a+a*a
⊢ (1, ε, ε)
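The nondeterministic search behind this trace can be sketched directly. The code below explores PDA configurations (remaining input, stack) for the expression grammar, using the fact that every symbol of this grammar derives at least one terminal to prune stacks longer than the remaining input; the representation is our own.

```python
rules = {"E": ["E+T", "T"], "T": ["T*F", "F"], "F": ["(E)", "a"]}

def accepts(w, start="E"):
    seen, todo = set(), [(w, start)]
    while todo:
        cfg = todo.pop()
        if cfg in seen:
            continue
        seen.add(cfg)
        inp, stk = cfg
        if not inp and not stk:
            return True                   # reached (1, eps, eps)
        if not stk:
            continue
        top = stk[0]
        if top in rules:                  # rule 2: expand the top non-terminal
            for rhs in rules[top]:
                ns = rhs + stk[1:]
                if len(ns) <= len(inp):   # prune: stack cannot outgrow input
                    todo.append((inp, ns))
        elif inp and inp[0] == top:       # rule 3: match a terminal
            todo.append((inp[1:], stk[1:]))
    return False
```

The state space is finite once pruned, so the search always terminates, even though the grammar is left-recursive.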
3.6.2 Algorithm to find CFG corresponding to a given PDA
Input − A PDA, P = (Q, ∑, S, δ, q0, I, F)
Output − An equivalent CFG, G = (V, T, P, S), whose non-terminals are
{Xwx | w, x ∈ Q} and whose start symbol is Xq0,F.
Step 1 − For every w, x, y, z ∈ Q, m ∈ S and a, b ∈ ∑, if δ(w, a, ε) contains (y, m) and
δ(z, b, m) contains (x, ε), add the production rule Xwx → a Xyz b to grammar G.
Step 2 − For every w, x, y, z ∈ Q, add the production rule Xwx → XwyXyx to grammar G.
Step 3 − For w ∈ Q, add the production rule Xww → ε to grammar G.
Problem 1
The grammar for a^n b^n
Use the grammar: S ⟶ ε | aSb
Here is the PDA:
a*b* example
Two common variations on a's followed by b's. When the counts are equal, no stack-bottom
marker is necessary. When they are unequal, you have to be prepared to recognize whether
the stacked a's have been completely matched or not.
a. { a^n b^n : n ≥ 0 }
b. { a^m b^n : 0 ≤ m < n }
Let's look at a few sample runs of (b). The idea is that you cannot enter the final state
with an "a" still on the stack. Once you reach the final state, you can consume the
remaining b's and the end marker.
We can start from state 1 with the stack bottom pushed on:
success: abb
state input stack
1 abb$ c
1 bb$ ac
2 b$ ac
2 $ c
3 ε ε
fail: ab
state input stack
1 ab$ c
1 b$ ac
2 $ ac
The run is stuck in state 2: the string ab fails because an "a" remains unmatched on the stack.
QUESTION BANK
PART- A
1. Define the acceptance of a PDA by empty stack. Is it true that the languages accepted
by a PDA by empty stack and by final state are different? [May 2017]
2. What additional feature does a PDA have when compared with an NFA? Is a PDA
superior to an NFA in the sense of language acceptance? Justify your answer.
[Nov/Dec 2013]
3. What are the different ways in which a PDA accepts the language? Define them.
[Apr/May 2018]
4. Is it true that a non-deterministic PDA is more powerful than a deterministic PDA?
Justify your answer. [Nov 2016, Apr/May 2018]
5. Explain acceptance of a PDA with empty stack.
6. Define the instantaneous description of a PDA.
7. Give the formal definition of a PDA. [May 2011, Nov 2017]
8. Define the languages generated by a PDA using the final state of the PDA and the
empty stack of that PDA. [May 2004, Nov 2006]
9. Define CFG. [Nov 2016]
10. Find L(G) where G = ({S}, {0,1}, {S → 0S1, S → ε}, S).
11. Define derivation tree for a CFG (or) Define parse tree.
12. Construct the CFG generating the language L = {a^n b^n | n ≥ 1}. [Nov 2004,
Nov/Dec 2013]
13. Find the language generated by the CFG G = ({S}, {0,1}, {S → 0 | 1 | ε, S → 0S0 | 1S1}, S).
14. Write the CFG to generate the set {a^m b^n c^p | m + n = p and p ≥ 1}.
15. Give an example of a context-free grammar.
PART-B & C
1. Explain in detail about Pushdown Automata and CFG. (16) [May/Jun 2013, Nov 2017]
2. Consider the GNF CFG G = ({S, T, C, D}, {a, b, c, d}, S, P) where P is:
ScCDDtc/∈ C TD c
TDCcST a DC d
Present a pushdown automaton that accepts the language generated by this grammar.
3. Construct a PDA for the set of palindromes over the alphabet {a, b}. [Apr/May 2017,
Apr/May 18]
4. Construct the PDA that accepts the language . [May/Jun 2013, Nov/Dec 17]
5. Construct a PDA that accepts L = { ww^R | w ∈ (a+b)* }. [May/June 16]
6. Consider the grammar: [Apr/May 2010]
S → iCtS
S → iCtSeS
S → a
C → b
where i, t, and e stand for if, then, and else, and C and S stand for
"conditional" and "statement" respectively.
(1) Construct a leftmost derivation for the sentence w = ibtibtaea.
(2) Show the corresponding parse tree for the above sentence.
(3) Is the above grammar ambiguous? If so, prove it.
(4) Remove the ambiguity, if any, and prove that both grammars produce the
same language.
7. Find a context-free grammar for the following language: L = {a^n b^m c^k : k = n + m}.
[Nov/Dec 2013, Apr/May 2018]
8. When is a grammar said to be ambiguous? Explain with the help of an example.
[May/June 2016, Nov 2016]
UNIT IV
The context-free grammar is a type 2 grammar: every production has the form A → β,
where A is a single non-terminal and β is any string of terminals and non-terminals.
CFG has single non terminal on the left side of the productions and any number of
terminals and non terminals on the right side.
Example:
CFG are as follows:
A→b
S → ABC
S → aAB
The normal forms of CFG are as follows:
1. Chomsky Normal Form (CNF)
2. Greibach Normal Form (GNF)
4.1.1 Simplification of CFG
To make CFG in a Chomsky Normal Form, the following simplifications are to be made.
1. Eliminate useless symbols.
2. Eliminate ε productions.
3. Eliminate unit productions.
4.1.1.1 Eliminating useless symbols
1. Useless symbols of the grammar are those non-terminals or terminals that do not
appear in any derivation of a terminal string from the start symbol.
2. A symbol X (in V or T) of the grammar G = (V, T, P, S) is useful if there is a derivation
of the form
S ⇒* αXβ ⇒* w
where w is a string of terminals; every other symbol is useless.
There are two kinds of symbols to identify when eliminating the useless symbols:
Generating Symbols
Reachable Symbols
Generating Symbols
The symbol X in the grammar is generating if X ⇒* w for some terminal string w.
Every terminal is always generating, since it derives itself in zero steps.
Example: in the grammar
S → c | Xd
S is generating (S ⇒ c), but X is not generating if no X-production eventually yields a
terminal string.
Reachable symbols
The symbol X is reachable if there is a derivation of the form S ⇒* αXβ for some α and
β over V ∪ T of the grammar G. Example:
S → XY | C
X → a
Y → d
D → 0D
Here D is unreachable: no sentential form derived from S contains D.
If we want to eliminate the useless symbols, then we have to remove all the symbols
that are not generating and that are not reachable.
Example
Remove the useless symbols in the given CFG.
S → AB | CA
A→a
B → BC | AB
C → aB | b
Solution:
First find all the productions that directly yield terminal strings:
A → a and C → b
so A and C are generating. Now consider B: its productions are B → BC and B → AB,
and every body contains B itself, so no derivation from B can ever terminate in a string
of terminals. For example,
S → AB ⇒ aB ⇒ aBC ⇒ aABC ⇒ aaBC ⇒ … and so on, without end.
Hence B is not generating, and we eliminate B together with every production that
mentions it.
After eliminating B we get S → CA, A → a, C → b
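The "generating" computation is a simple fixed point: start from the terminals and repeatedly add any variable that has a body built entirely from generating symbols. A sketch, with our own encoding (uppercase letters are variables):

```python
def generating(prods):
    """prods maps each variable to its list of right-hand sides (strings);
    uppercase characters are variables, everything else is a terminal."""
    gen, changed = set(), True
    while changed:
        changed = False
        for A, rhss in prods.items():
            if A in gen:
                continue
            # A is generating if some body uses only terminals / generating vars
            if any(all(not s.isupper() or s in gen for s in rhs) for rhs in rhss):
                gen.add(A)
                changed = True
    return gen

grammar = {"S": ["AB", "CA"], "A": ["a"], "B": ["BC", "AB"], "C": ["aB", "b"]}
```

On this grammar the fixed point is {A, C, S}; B is not generating, so every production mentioning B is removed, leaving S → CA, A → a, C → b as in the text.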
Consider S → XY | YX | XX with X → 0X | ε and Y → 1Y | ε; X and Y are nullable.
In S → YX, taking Y → ε gives S → X, and taking X → ε gives S → Y.
In S → XX, taking either X → ε gives S → X.
After eliminating the ε-productions from the S productions, we write
S → XY | YX | XX | X | Y
Now consider
X → 0X | ε, which becomes X → 0X | 0
Y → 1Y | ε, which becomes Y → 1Y | 1
Finally,
S → XY | YX | XX | X | Y
X → 0X | 0
Y → 1Y | 1
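The ε-elimination step ("drop each subset of nullable occurrences") can be written as a short sketch; the dict encoding is our own, and note that the construction drops ε from the generated language itself.

```python
from itertools import product

def eliminate_eps(prods):
    """prods maps variables to lists of bodies (strings, "" for epsilon);
    uppercase characters are variables."""
    nullable = {A for A, rs in prods.items() if "" in rs}
    changed = True
    while changed:                        # close under all-nullable bodies
        changed = False
        for A, rs in prods.items():
            if A not in nullable and any(r and all(s in nullable for s in r)
                                         for r in rs):
                nullable.add(A)
                changed = True
    new = {}
    for A, rs in prods.items():
        out = set()
        for r in rs:
            # for each nullable occurrence, either keep it or drop it
            options = [(s, "") if s in nullable else (s,) for s in r]
            for combo in product(*options):
                body = "".join(combo)
                if body:                  # epsilon itself is not kept
                    out.add(body)
        new[A] = sorted(out)
    return new
```

For S → XY with X → 0X | ε and Y → 1Y | ε this yields S → XY | X | Y, X → 0X | 0, Y → 1Y | 1.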
4.1.1.3 Eliminating Unit productions
1. Productions of the form A → B, where A, B ∈ V, are called unit productions.
2. First eliminate useless symbols and ε-productions.
3. Discover those pairs of variables (A, B) such that A ⇒* B.
Because there are no ε-productions, such a derivation can use only unit
productions.
Thus, we can find the pairs by computing reachability in a graph whose
nodes are the variables and whose arcs are the unit productions.
4. For each pair (A, B) and each production B → α where α is not a single
variable, add A → α.
That is, we "short circuit" sequences of unit productions, which must eventually
be followed by some other kind of production.
Then remove all unit productions.
Consider the grammar G: S → A, A → B, B → C, C → d.
Here S ⇒* A ⇒* B ⇒* C through unit productions, so the resultant grammar is S → d. This is
called the chain rule.
Example
Eliminate unit productions from the following grammar.
S → A | bb
A → B | b
B → S | a
Solution:
In the given grammar, the unit productions are S → A, A → B and B → S.
S → A gives S → b.
S → A → B gives S → B, which gives S → a.
A → B gives A → a.
A → B → S gives A → S, which gives A → bb.
B → S gives B → bb.
B → S → A gives B → A, which gives B → b.
The new productions are
S → bb | b | a
A → b | a | bb
B → a | bb | b
These have no unit productions. To obtain the reduced CFG, we eliminate the useless
symbols: A and B are now unreachable from S, so their productions can be removed.
The resultant grammar is S → bb | b | a.
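Step 3's reachability computation and step 4's "short-circuiting" fit in a few lines; a sketch with our own dict encoding (single-character variable names):

```python
def eliminate_units(prods):
    """prods maps each variable to a list of bodies; a body that is exactly
    one variable name is a unit production."""
    nts = set(prods)
    pairs = {(A, A) for A in nts}           # A =>* A in zero steps
    changed = True
    while changed:                          # close under unit productions
        changed = False
        for A, B in list(pairs):
            for rhs in prods[B]:
                if rhs in nts and (A, rhs) not in pairs:
                    pairs.add((A, rhs))
                    changed = True
    new = {A: [] for A in nts}
    for A, B in pairs:                      # short-circuit: copy non-unit bodies
        for rhs in prods[B]:
            if rhs not in nts and rhs not in new[A]:
                new[A].append(rhs)
    return new

g = eliminate_units({"S": ["A", "bb"], "A": ["B", "b"], "B": ["S", "a"]})
```

Every variable ends up with the bodies {a, b, bb}; eliminating the now-unreachable A and B leaves S → bb | b | a, as above.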
4.1.2 Normal Forms
The grammar can be simplified by reducing the ε production, removing useless
symbols, unit production. There is also a need to have grammar in some specific
form.
In CFG at the right hand of the production there are any numbers of terminal or non
terminal symbols in any combination.
To normalize such a grammar, that means, we want the grammar in some specific
format.
There are two important normal forms such as,
Chomsky’s Normal Form (CNF)
Greibach Normal Form (GNF)
S → AAAAS
S → AAAA
Consider rule 1, S → AAAAS. Write it as S → A(AAAS) and define
P1 → AAAS, so the rule becomes S → AP1.
P1 is not yet in CNF, so convert P1 the same way:
P1 → A(AAS); introduce P2 → AAS, giving P1 → AP2.
P2 is still not in CNF, so introduce P3 → AS, giving P2 → AP3.
P3 → AS is in CNF. So for rule 1 the grammar is rewritten as
S → AP1
P1 → AP2
P2 → AP3
P3 → AS
Consider rule 2, S → AAAA. Split it as S → (AA)(AA) and introduce
P4 → AA
P5 → AA
Both P4 and P5 are the same, so keeping only P4 we write
S → P4P4
Finally the CNF can be written as,
S → AP1
P1 → AP2
P2 → AP3
P3 → AS
S → P4P4
P4 → AA
A→a
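The chain P1, P2, P3 above is produced mechanically by peeling one symbol at a time off a long body; a sketch (the fresh variable names are our own):

```python
def binarize(head, body, fresh):
    """Split head -> body (a list of symbols) into CNF-style binary rules."""
    rules = []
    while len(body) > 2:
        new = next(fresh)                    # fresh variable for the tail
        rules.append((head, [body[0], new]))
        head, body = new, body[1:]
    rules.append((head, body))
    return rules

chain = binarize("S", ["A", "A", "A", "A", "S"], iter(["P1", "P2", "P3"]))
```

Applied to S → AAAAS this reproduces exactly the rules S → AP1, P1 → AP2, P2 → AP3, P3 → AS derived above.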
4.1.2.2 Greibach Normal form
The rule for GNF is
Non-terminal → one terminal followed by any number of non-terminals
(NT → t NT*)
Example:
S → aA is in GNF
S → a is in GNF
S → AA is not in GNF
S → Aa is not in GNF
We can use two important lemmas based on which it is easy to convert the given CFG to
GNF.
Lemma 1:
Let G = (V, T, P, S) be a given CFG. If there is a production A → Ba and
B → β1 | β2 | … | βn, then we can replace the A rule by
A → β1a | β2a | β3a | … | βna
For example, given
S → Aa
A → aA | bA | aAS | b
we can convert the S rule to GNF as
S → aAa | bAa | aASa | ba
A → aA | bA | aAS | b
Lemma 2:
Let G = (V, T, P, S) be a given CFG. If there is a left-recursive set of productions
A → Aa1 | Aa2 | … | Aan | β1 | β2 | … | βm
such that no βi starts with A, then an equivalent grammar without the left recursion is
A → β1 | β2 | … | βm
A → β1Z | β2Z | … | βmZ
Z → a1 | a2 | … | an
Z → a1Z | a2Z | … | anZ
For example, consider A → A1 | 0B | 2.
Here
A → 0B | 2
A → 0BZ | 2Z
Z → 1
Z → 1Z
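Lemma 2 as a sketch, for single-character symbols; the fresh variable is written A' instead of Z, and the encoding is our own:

```python
def remove_left_recursion(A, bodies):
    """Split A -> A a1 | ... | A an | b1 | ... | bm into right-recursive rules."""
    rec = [b[1:] for b in bodies if b and b[0] == A]     # the a_i tails
    base = [b for b in bodies if not b or b[0] != A]     # the b_j bodies
    if not rec:
        return {A: bodies}                               # nothing to do
    Z = A + "'"
    return {A: base + [b + Z for b in base],
            Z: rec + [a + Z for a in rec]}

g = remove_left_recursion("A", ["A1", "0B", "2"])
```

On the example A → A1 | 0B | 2 this produces A → 0B | 2 | 0BA' | 2A' and A' → 1 | 1A', matching the lemma.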
Problems:
1. Convert the following grammar to GNF.
S → abSb
S → aa
Solution:
Every rule must have the form NT → t NT*.
Consider the first production:
i) S → abSb
Create a new non-terminal B with B → b and substitute it for the non-leading
terminals in the S rule:
S → aBSB, which is in GNF.
Similarly,
ii) S → aa
Create a new non-terminal A with A → a, so S → aA.
Finally,
S → aBSB | aA
A → a
B → b
2. Convert the given CFG to GNF
S → ABA
A → aA | ε
B → bB | ε
Solution:
Eliminate the ε-productions from the given grammar:
S → ABA | AB | AA | BA | A | B
A → aA | a
B → bB | b
Remove the unit productions S → A and S → B, and bring the remaining S-productions
into GNF by substituting for the leading non-terminal.
For the S-productions beginning with A (S → ABA | AB | AA | A), substituting
A → aA | a gives
S → aABA | aAB | aAA | aBA | aA | aB | a
For the S-productions beginning with B (S → BA | B), substituting B → bB | b gives
S → bBA | bB | bA | b
Finally,
S → aABA | aAB | aAA | aBA | aA | aB | a
S → bBA | bB | bA | b
A → aA | a
B → bB | b
3. Convert the following CFG into CNF
S → ASA | aB,
A → B | S,
B→b|ε
Solution
(1) Since S appears on the R.H.S., we add a new start symbol S0 and S0 → S is added
to the production set, and it becomes −
S0 → S,
S → ASA | aB,
A → B | S,
B→b|ε
(2) Now we will remove the null productions −
B → ε and A → ε
After removing B → ε, the production set becomes −
S0 → S,
S → ASA | aB | a,
A → B | S | ε,
B→b
After removing A → ε, the production set becomes −
S0 → S,
S → ASA | aB | a | AS | SA | S,
A → B | S,
B→b
(3) Now we will remove the unit productions.
After removing S → S, the production set becomes −
S0 → S,
S → ASA | aB | a | AS | SA,
A → B | S,
B→b
After removing S0 → S, the production set becomes −
S0 → ASA | aB | a | AS | SA,
S → ASA | aB | a | AS | SA
A → B | S,
B→b
After removing A → B, the production set becomes −
S0 → ASA | aB | a | AS | SA,
S → ASA | aB | a | AS | SA
A→S|b
B→b
After removing A → S, the production set becomes −
S0 → ASA | aB | a | AS | SA,
S → ASA | aB | a | AS | SA
A → b |ASA | aB | a | AS | SA,
B→b
(4) Now we will find the productions with more than two symbols on the R.H.S.
Here S0 → ASA, S → ASA and A → ASA have three non-terminals on the R.H.S., which
violates CNF.
Hence we will apply step 4 and step 5 to get the following final production set which is in
CNF
S0 → AX | aB | a | AS | SA
S → AX | aB | a | AS | SA
A → b |AX | aB | a | AS | SA
B→b
X → SA
(5) We have to change the productions
S0 → aB,
S → aB,
A → aB
And the final production set becomes −
S0 → AX | YB | a | AS | SA
S → AX | YB | a | AS | SA
A → b |AX | YB | a | AS | SA
B→b
X → SA
Y→a
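A CNF grammar is exactly what the CYK membership algorithm needs. The sketch below runs CYK with the final production set from this problem, splitting each rule into its terminal and binary parts; the table encoding is our own.

```python
# Final CNF grammar of the problem, split into terminal and binary rules.
terminal = {"S0": ["a"], "S": ["a"], "A": ["a", "b"], "B": ["b"], "Y": ["a"]}
binary = {"S0": ["AX", "YB", "AS", "SA"],
          "S":  ["AX", "YB", "AS", "SA"],
          "A":  ["AX", "YB", "AS", "SA"],
          "X":  ["SA"]}

def cyk(w, start="S0"):
    n = len(w)
    if n == 0:
        return False
    T = [[set() for _ in range(n)] for _ in range(n)]   # T[i][j]: span w[i..j]
    for i, c in enumerate(w):
        T[i][i] = {A for A, ts in terminal.items() if c in ts}
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span - 1
            for k in range(i, j):                        # split point
                for A, bodies in binary.items():
                    for b in bodies:
                        if b[0] in T[i][k] and b[1] in T[k + 1][j]:
                            T[i][j].add(A)
    return start in T[0][n - 1]
```

For instance "ab" is derivable (S0 → YB, Y → a, B → b), while "b" is not: no S0-production can begin with b.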
and thus the leaf level must also have 2^k nodes, because the productions used to get the
leaves are of the form A → a. Thus, |w| ≤ 2^k also.
Now, suppose the grammar has k variables, i.e. |V| = k. Let w be a string such that S
⇒* w and |w| > 2^k. This means that every parse tree for w has height at least k + 1.
Suppose we follow a path from the root labeled S to a leaf on level k + 1. There are k + 2
nodes along the path (a path always has one more node than edges), and k + 1 of those are
labeled by variables. Thus, two nodes along the path must be labeled by the same
variable. This means we have a situation that looks like this:
Here is what this means: starting from a leaf and working upward toward the root, we
can identify two vertices Ai and Aj on the path which are labeled by the same variable, i.e.
Ai = Aj. Looking at the corresponding sentential forms in a derivation, what we really have
is this: S ⇒* uAiz. Going from Ai down to Aj we have Ai ⇒* vAiy, and finally Ai ⇒* x.
Thus, the derivation of w can proceed like this:
S ⇒* uAiz ⇒* uvAiyz ⇒* uvvAiyyz ⇒* … ⇒* uv^i Ai y^i z ⇒* u v^i x y^i z
What this indicates is that we can repeat the subtree rooted at the first Ai as often as
we'd like, until eventually we apply Ai ⇒* x, which stops the derivation.
Thus, if our original string w is "long enough", it can be broken up into substrings u, v, x,
y, and z, two of which (v and y) can be repeated as often as we'd like. That is, w = uvxyz
and for every i ≥ 0, uv^i x y^i z ∈ L. When using a proof by contradiction to show that a
language is not context-free, we assume that the opposite is true, i.e. assume L is context-
free. Then, by Theorem 8.1 there is some integer m so that if a string with length greater
than m is chosen then that string should "pump". In the notation of the theorem it must be
true that |vxy| ≤ m and |vy| ≥ 1 (in other words, at least one of v, y is nonempty). In order
to obtain our contradiction, we need to consider all possible “locations” of vxy and, in each
case, find a value of i such that when we pump i times the string we end up with is not in L.
As with the pumping lemma for regular languages we use the notation w_i = uv^i x y^i z,
and thus w_0 = uxz.
Here are some examples of how the pumping lemma for context-free languages is used.
When you find a value of i that gives you a contradiction you must explicitly state what that
contradiction is. You may not eliminate cases by saying that one case is substantially the
same as another. Each case should be explicitly considered. There is usually more than
one way to define the cases to be used so the most important thing is to make sure every
possible place that vxy could appear is considered.
Example:
Using the pumping lemma, prove that the language L = {a^n b^n c^n | n ≥ 1} is not
context-free.
Solution:
Assume, for contradiction, that L is context-free. The pumping lemma then gives a
constant p such that every z ∈ L with |z| ≥ p can be written as
z = uvwxy, with |vwx| ≤ p and |vx| ≥ 1,
and uv^i w x^i y ∈ L for every i ≥ 0.
Choose z = a^p b^p c^p ∈ L. Since |vwx| ≤ p, the substring vwx can touch at most two
of the three blocks of a's, b's and c's; it can never contain a's, b's and c's together.
Now pump down with i = 0. The string uwy is obtained from z by deleting the |vx| ≥ 1
symbols of v and x, and these deletions fall in at most two of the three blocks. Hence at
least one block still has exactly p symbols while another has lost at least one, so the
numbers of a's, b's and c's in uwy are no longer all equal and uwy ∉ L.
This contradicts the pumping lemma, so L is not context-free.
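The case analysis ("vwx can touch at most two blocks") can be checked exhaustively for one concrete string: the sketch below tries every decomposition z = uvwxy with |vwx| ≤ p and |vx| ≥ 1 and reports whether any of them pumps. The membership predicates are our own.

```python
def pumps(z, p, member):
    """True iff some decomposition z = uvwxy (|vwx| <= p, |vx| >= 1)
    keeps u v^i w x^i y inside the language for i = 0, 2, 3."""
    n = len(z)
    for i in range(n + 1):
        for l in range(i, min(i + p, n) + 1):            # vwx = z[i:l]
            for j in range(i, l + 1):
                for k in range(j, l + 1):
                    v, x = z[i:j], z[k:l]
                    if not (v + x):
                        continue
                    u, w, y = z[:i], z[j:k], z[l:]
                    if all(member(u + v * m + w + x * m + y) for m in (0, 2, 3)):
                        return True
    return False

# membership tests for {a^n b^n c^n : n >= 1} and {a^n b^n : n >= 0}
abc = lambda s: s.count("a") >= 1 and s == "a" * s.count("a") + \
    "b" * s.count("a") + "c" * s.count("a")
ab = lambda s: s == "a" * s.count("a") + "b" * s.count("b") and \
    s.count("a") == s.count("b")
```

As expected, a^3 b^3 c^3 fails to pump for p = 3, while a^3 b^3 (from the context-free language a^n b^n) does pump.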
4.3 CLOSURE PROPERTIES OF CFL
Context-free languages are closed under several common operations.
1.Union
Suppose we have grammars for two languages, with start symbols S and T. Rename
variables as needed to ensure that the two grammars don't share any variables. Then
construct a grammar for the union of the languages, with start symbol Z, by taking all the
rules from both grammars and adding a new rule Z -> S | T.
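The union construction is a one-liner once the variable sets are disjoint; a sketch (the dict encoding and the name Z are our own):

```python
def union_grammar(g1, s1, g2, s2, new_start="Z"):
    """Grammar for L(g1) union L(g2). Assumes the two variable sets are
    disjoint and that new_start is unused; rename variables beforehand
    otherwise."""
    g = dict(g1)
    g.update(g2)
    g[new_start] = [s1, s2]          # the new rule Z -> S | T
    return g

# toy example: a+ united with b+
g = union_grammar({"S": ["aS", "a"]}, "S", {"T": ["bT", "b"]}, "T")
```

The concatenation and star constructions below differ only in the rule added for the new start symbol (Z -> ST, and T -> TS | ε respectively).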
2.Concatenation
Suppose we have grammars for two languages, with start symbols S and T. Rename
variables as needed to ensure that the two grammars don't share any variables. Then
construct a grammar for the concatenation of the languages, with start symbol Z, by taking
all the rules from both grammars and adding a new rule Z -> ST.
3.Star
Suppose that we have a grammar for the language L, with start symbol S. The grammar
for L*, with start symbol T, contains all the rules from the original grammar plus the rule
T -> TS | ε.
4.String reversal
Reverse the character string on the right hand side of every rule in the grammar.
5.Homomorphism
Suppose that we have a grammar G for language L and a homomorphism h. To
construct a grammar for h(L), modify the right hand side of every rule in G to replace each
terminal symbol t with its image h(t) under the homomorphism.
6.Intersection with a regular language
The intersection of a context-free language and a regular language is always context-
free. To show this, assume we have a PDA M accepting the context-free language and a
DFA N accepting the regular language. Use the product construction to create a PDA which
simulates both machines in parallel. This works because only M needs to manipulate the
stack; N never touches the stack.
Non-closure facts
Context-free languages are not closed under set intersection or set complement.
1.Intersection
Consider the languages L1 and L2 defined by L1 = {a^n b^n c^j : n, j ≥ 0} and
L2 = {a^j b^n c^n : n, j ≥ 0}. They are both context-free. However, their intersection is the
language L = {a^n b^n c^n : n ≥ 0}. We used the pumping lemma to show that L is not
context-free.
2.Set complement
One way to show this is with De Morgan's laws: closure under set complement together
with closure under union would imply closure under intersection, which we have just seen
fails.
4.4 TURING MACHINES
A Turing Machine is an accepting device which accepts the languages (recursively
enumerable set) generated by type 0 grammars. It was invented in 1936 by Alan Turing.
4.4.1 Definition
A Turing Machine (TM) is a mathematical model which consists of an infinite length tape
divided into cells on which input is given. It consists of a head which reads the input tape. A
state register stores the state of the Turing machine. After reading an input symbol, it is
replaced with another symbol, its internal state is changed, and it moves from one cell to
the right or left. If the TM reaches the final state, the input string is accepted, otherwise
rejected.
A TM can be formally described as a 7-tuple (Q, X, ∑, δ, q0, B, F) where −
Q is a finite set of states
X is the tape alphabet
∑ is the input alphabet
δ is a transition function; δ : Q × X → Q × X × {Left_shift, Right_shift}.
q0 is the initial state
B is the blank symbol
F is the set of final states
Example of Turing machine
Turing machine M = (Q, X, ∑, δ, q0, B, F) with
Q = {q0, q1, q2, qf}
X = {1, B}
∑ = {1}
q0 is the initial state
B is the blank symbol
F = {qf}
δ is given by −
Here the transition 1Rq1 implies that the write symbol is 1, the tape moves right, and the
next state is q1. Similarly, the transition 1Lq2 implies that the write symbol is 1, the tape
moves left, and the next state is q2.
4.4.2 Time and Space Complexity of a Turing Machine
For a Turing machine, the time complexity refers to the measure of the number of times the
tape moves when the machine is initialized for some input symbols and the space
complexity is the number of cells of the tape written.
For all reasonable functions, the time complexity is written as, e.g.,
T(n) = O(n log n)
and the TM's space complexity as, e.g.,
S(n) = O(n)
4.4.3 Acceptance of Languages
A TM accepts a language if it enters a final state for every string w of that language.
A language is recursively enumerable (generated by a Type-0 grammar) if it is accepted
by a Turing machine.
There may be some cases where a TM does not stop. Such TM accepts the
language, but it does not decide it.
The basic guidelines of designing a Turing machine have been explained below with the
help of a couple of examples.
Example 1
Design a TM to recognize all strings consisting of an odd number of α’s.
Solution
M enters the state q1 if it has scanned an even number of α’s, and the state q2 if it has
scanned an odd number of α’s. Hence q2 is the only accepting state.
Hence, M = ({q1, q2}, {α}, {α, B}, δ, q1, B, {q2}),
where δ is given by −
δ(q1, α) = (q2, B, R)
δ(q2, α) = (q1, B, R)
Example 2
Design a Turing Machine that reads a string representing a binary number and erases all
leading 0’s in the string. However, if the string comprises only 0’s, it keeps one 0.
Solution
Let us assume that the input string is terminated by a blank symbol, B, at each end of the
string.
If M is in q0, on reading 0, it moves right, enters the state q1 and erases 0. On reading
1, it enters the state q2 and moves right.
If M is in q1, on reading 0, it moves right and erases it, i.e., it replaces 0’s by B’s. On
reaching the leftmost 1, it enters q2 and moves right. If it reaches B, i.e., the string
comprises only 0’s, it moves left and enters the state q3.
If M is in q3, it replaces B by 0, moves left and reaches the final state qf.
Hence,
M = ({q0, q1, q2, q3, qf}, {0, 1}, {0, 1, B}, δ, q0, B, {qf})
where δ is given by −
Example 3
Design a Turing Machine which reverses the given string {abb}
Example 4
Construct a Turing Machine (TM) to move an input string over the alphabet A = {a} one
cell to the right. Assume that the tape head starts somewhere on a blank cell to the left of
the input string. All other cells are blank, labeled by λ. The machine must move the entire
string one cell to the right, leaving all remaining cells blank.
Solution:
The format of the string in the tape is:
λ λ a a a λ λ λ …
↑
R/W head
The required transition rules are:
δ(q0, λ) = (q1, λ, R)
δ(q1, a) = (q2, λ, R)
δ(q2, a) = (q2, a, R)
δ(q2, λ) = (q3, a, R)
where q3 ∈ F. The machine erases the leftmost a and writes a fresh a in the first blank
cell after the string, which shifts the whole block one cell to the right.
Example 5
Design a Turing Machine to recognize each of the following languages.
(i) {0^n 1^n | n ≥ 1}
(ii) {ww^R | w ∈ (0+1)*}
(i) {0^n 1^n | n ≥ 1}
Solution:
Initially the tape of M contains 0^n1^n followed by an infinite sequence of blanks. Starting
at the leftmost 0, we check it off by replacing it with some other symbol, say x. We then let
the read-write head travel right to find the leftmost 1, which in turn is checked off by
replacing it with another symbol, say y. After that, we go left again to the leftmost
unchecked 0, replace it with an x, then move to the leftmost 1, and replace it with y, and
so on.
Travelling back and forth the same way we match each 0 with a corresponding 1. If after
some time no 0’s and 1’s remain, then the string must be in L.
Working on this, the solution is
Q = {q0, q1, q2, q3, q4}
F = {q4}
∑ = {0, 1}
Γ = {0, 1, x, y, B}
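A deterministic runner plus one standard transition table for {0^n 1^n | n ≥ 1} makes the back-and-forth checking concrete; the exact states and moves below are our assumption (x and y are the check-off marks).

```python
def run_tm(w, delta, start="q0", accept=("q4",), blank="B", limit=10000):
    tape, pos, q = dict(enumerate(w)), 0, start
    for _ in range(limit):
        if q in accept:
            return True
        key = (q, tape.get(pos, blank))
        if key not in delta:
            return False                    # no move defined: reject
        q, write, move = delta[key]
        tape[pos] = write
        pos += 1 if move == "R" else -1
    return False

# Check off 0's with x and matching 1's with y, travelling back and forth.
delta = {
    ("q0", "0"): ("q1", "x", "R"), ("q0", "y"): ("q3", "y", "R"),
    ("q1", "0"): ("q1", "0", "R"), ("q1", "y"): ("q1", "y", "R"),
    ("q1", "1"): ("q2", "y", "L"),
    ("q2", "0"): ("q2", "0", "L"), ("q2", "y"): ("q2", "y", "L"),
    ("q2", "x"): ("q0", "x", "R"),
    ("q3", "y"): ("q3", "y", "R"), ("q3", "B"): ("q4", "B", "R"),
}
```

The same run_tm scaffold works for any deterministic single-tape machine given as a dict.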
Multi-tape Turing Machines have multiple tapes, each accessed with a separate head.
Each head can move independently of the others. Initially the input is on tape 1 and the
other tapes are blank. At each step, the machine reads the symbols under all of its
heads and, depending on its state and those symbols, prints a symbol on each tape,
moves each head, and changes state.
Note − Every Multi-tape Turing machine has an equivalent single-tape Turing machine.
4.5 Programming Techniques for Turing Machine Construction
Designing Turing machines by writing out a complete set of states and a next-move
function is a noticeably unrewarding task. In order to describe complicated Turing machine
constructions we need some "higher-level" conceptual tools.
As a running example, consider a TM computing proper subtraction. Assume the input
integers m and n are put on the input tape separated by a 1, as 0^m 1 0^n (two unary
numbers written with 0’s, separated by the special symbol 1).
The TM is M = ({q0, q1, …, q6}, {0, 1}, {0, 1, B}, δ, q0, B).
No final state is needed.
M conducts the following computation steps:
1. find its leftmost 0 and replaces it by a blank;
2. move right, and look for a 1;
3. after finding a 1, move right continuously
4. after finding a 0, replace it by a 1;
5. move left until finding a blank, and then move one cell to the right to get the
leftmost remaining 0;
6. repeat the above process.
The transition table of M is as shown in Table 8.2.
The transition table for the TM
symbol
state 0 1 B
q0 (q1, B, R) (q5, B, R) -
q1 (q1, 0, R) (q2, 1, R) -
q2 (q3, 1, L) (q2, 1, R) (q4, B, L)
q3 (q3, 0, L) (q3, 1, L) (q0, B, R)
q4 (q4, 0, L) (q4, B, L) (q6, 0, R)
q5 (q5, B, R) (q5, B, R) (q6, B, R)
q6 - - -
(Figures: the sequences of moves computing 2 ∸ 1 = 1 and 1 ∸ 2 = 0.)
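The transition table of M can be executed directly. The sketch below runs those transitions on 0^m 1 0^n and counts the 0's left on the tape, which equals m ∸ n (proper subtraction); the runner itself is our own scaffolding.

```python
# Transition table 8.2, keyed by (state, scanned symbol).
delta = {
    ("q0", "0"): ("q1", "B", "R"), ("q0", "1"): ("q5", "B", "R"),
    ("q1", "0"): ("q1", "0", "R"), ("q1", "1"): ("q2", "1", "R"),
    ("q2", "0"): ("q3", "1", "L"), ("q2", "1"): ("q2", "1", "R"),
    ("q2", "B"): ("q4", "B", "L"),
    ("q3", "0"): ("q3", "0", "L"), ("q3", "1"): ("q3", "1", "L"),
    ("q3", "B"): ("q0", "B", "R"),
    ("q4", "0"): ("q4", "0", "L"), ("q4", "1"): ("q4", "B", "L"),
    ("q4", "B"): ("q6", "0", "R"),
    ("q5", "0"): ("q5", "B", "R"), ("q5", "1"): ("q5", "B", "R"),
    ("q5", "B"): ("q6", "B", "R"),
}

def monus(m, n):
    """Run M on 0^m 1 0^n until it halts in q6; the number of 0's left on
    the tape is m - n when m >= n, and 0 otherwise."""
    tape, pos, q = dict(enumerate("0" * m + "1" + "0" * n)), 0, "q0"
    while q != "q6":
        q, write, move = delta[(q, tape.get(pos, "B"))]
        tape[pos] = write
        pos += 1 if move == "R" else -1
    return sum(1 for s in tape.values() if s == "0")
```

The traces for 2 ∸ 1 = 1 and 1 ∸ 2 = 0 match the figures referenced above.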
In the figure we see that the finite control consists not only of a state q but also of three
data elements A, B and C. The technique requires no extension to the TM model; we
merely think of the state as the tuple [q, A, B, C]. Regarding states this way allows us to
describe transitions in a more systematic way and often to simplify the strategy of the
program.
Example: to recognize the language 01* + 10*, the machine stores the first input symbol
in the finite control and then checks that it never reappears in the rest of the input.
We can also view the tape as divided into several tracks, each cell holding one symbol per
track. This technique does not extend the model of the TM. It is simply a way to view tape
symbols and to imagine that they have a useful structure. A common use of multiple tracks
is to use one track to mark cells and the second track to hold the data.
4.5.4 Subroutines
As with programs, a "modular" or "top-down" design is facilitated if we use subroutines
to define elementary processes. A Turing machine can simulate any type of subroutine
found in programming languages, including recursive procedures and any of the known
parameter-passing mechanisms. The general idea is to write part of a TM program to serve
as a subroutine; it will have a designated initial state and a designated return state which has
no move and which will be used to effect a return to the calling routine. To design a TM that
"calls" the subroutine, a new copy of the subroutine's states is made, and the return is
effected by giving the return state a move back into the calling routine.
QUESTION BANK
PART-A (2 Marks)
1. What do you mean by null production and unit production? [May/June 16]
2. State whether regular languages are closed under intersection and
complementation. [May/June 16, Nov/Dec 17]
3. How to simplify the context free grammar?
4. What is a useless symbol? How to remove it?
5. Eliminate the useless symbols for the following productions.
6. What are ε-productions? How to eliminate the ε-productions?
7. Eliminate the null productions for the following grammar.
8. What are unit productions? How to eliminate the unit productions?
9. Eliminate the unit productions from the grammar below.
10. Define Chomsky normal form. [Nov/Dec 2016, Apr/May 2018]
11. Define Greibach normal form. [May/Jun 2013]
12. State the pumping lemma for context-free languages. [Dec 08]
13. Can you say whether the language generated by a CFG in CNF is finite or infinite?
If so, how? If not, why?
14. What is a Turing machine? [May/June 2016, Nov/Dec 2017, Apr/May 2018]
15. What are the special features of a TM? [Nov 2016]
16. Define Turing machine. [Nov 2016]
17. Define instantaneous description of a TM.
18. What are the applications of TM?
19. Define a move in TM.
20. What is the language accepted by a TM?
21. What are the techniques for Turing machine construction? [Nov/Dec 2013]
22. What is a multi-tape Turing machine?
PART – B & C
1. Convert the following grammar into CNF. [Nov/Dec 2013]
S → aX | Yb
X → S | ε
Y → bY | b
2. Convert the following grammar G into Greibach Normal Form (GNF). [May
2017, Nov/Dec 2017]
S → XA | BB
B → b | SB
X → b
A → a
3. Convert the following grammar into an equivalent one with no unit productions and no
useless symbols, then convert it to Chomsky Normal Form (CNF). [Apr/May 2010]
S → A | CB
A → C | D
B → 1B | 1
C → 0C | 0
D → 2D | 2
4. What is the purpose of normalization? Construct the CNF and GNF for the following
grammar and explain steps
5. Design a Turing Machine which reverses the given string {abb} [Nov/Dec 2012,May
2017]
6. Construct a Turing Machine (TM) to move an input string over the alphabet A = {a} one
cell to the right. Assume that the tape head starts somewhere on a blank cell to the left
of the input string. All other cells are blank, labeled by λ. The machine must move the
entire string one cell to the right, leaving all remaining cells blank. [Apr/May 2010]
7. Design a Turing Machine to recognize each of the following languages.
(i) {0^n 1^n | n ≥ 1}
(ii) {ww^R | w ∈ (0+1)*} [Apr/May 2009, Apr/May 2018]
8. Design a Turing Machine for the function f(n) = n^2. [Nov/Dec 2009]
9. Explain the variations of Turing Machine.[May/June 2016,May 2017]
10. Construct a Turing machine to accept palindromes in an alphabet set ∑= {0,1}. Trace
the string ‘0101’ and “1001” [May/June 2016]
11. Explain in detail about Programming Techniques for Turing Machine.[Nov/Dec 2017,
Apr/May 2018]
UNIT V
UNDECIDABILITY
Example
The halting problem of Turing machine
L1 consists of n a’s followed by n b’s, then n c’s, and then any number of d’s. L2 consists
of any number of a’s followed by n b’s, then n c’s, and then n d’s. Their intersection
consists of n a’s followed by n b’s, n c’s, and n d’s, i.e. {a^n b^n c^n d^n | n ≥ 0}. This
language can be decided by a Turing machine, hence it is recursive. Similarly, the
complement of a recursive language L1, which is ∑* − L1, is also recursive.
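As a hedged illustration (Python; the function name is mine), the intersection language {a^n b^n c^n d^n} can be decided by a simple procedure that always halts with a yes/no answer, which is exactly what makes it recursive:

```python
import re

def in_intersection(w: str) -> bool:
    """Decide membership in {a^n b^n c^n d^n | n >= 0}.

    The procedure always halts with yes/no, so the language
    is recursive (decidable), as argued in the text."""
    # Reject any string not of the overall shape a* b* c* d*.
    m = re.fullmatch(r"(a*)(b*)(c*)(d*)", w)
    if m is None:
        return False
    a, b, c, d = (len(g) for g in m.groups())
    # Accept only when all four block lengths agree.
    return a == b == c == d

print(in_intersection("aabbccdd"))  # True
print(in_intersection("aabbccd"))   # False
```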
5.2 UNDECIDABLE PROBLEM WITH RE
5.2.1 Halting Problem
Input − A Turing machine and an input string w.
Problem − Does the Turing machine finish computing on the string w in a finite number
of steps? The answer must be either yes or no.
Proof − First, assume that such a Turing machine exists to solve this problem; we will
then show that this assumption contradicts itself. Call this Turing machine a Halting
Machine: it produces a ‘yes’ or ‘no’ answer in a finite amount of time. If the given
machine finishes its computation on w in a finite amount of time, the output is ‘yes’;
otherwise it is ‘no’. The following is the block diagram of a Halting Machine −
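The self-referential construction at the heart of the contradiction can be sketched in runnable Python (a hedged illustration; names are mine, and halts stands for any claimed halting decider):

```python
def make_troublemaker(halts):
    """Build the self-referential program used in the proof.

    halts(prog, inp) is a *claimed* total decider for the halting
    problem; the construction shows that any such claim fails on
    the very program it produces."""
    def troublemaker(prog):
        if halts(prog, prog):
            while True:        # diverge if the decider says "halts"
                pass
        return "halted"        # halt if the decider says "loops"
    return troublemaker

# A decider that always answers "loops" is refuted immediately:
always_no = lambda prog, inp: False
t = make_troublemaker(always_no)
print(t(t))  # prints "halted" -- yet always_no claimed t(t) loops
```

The symmetric branch refutes any decider that answers “halts”: the troublemaker then loops forever, again contradicting the claim.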
        x1      x2      x3
M       abb     aa      aaa
N       bba     aaa     aa
Here,
x2x1x3 = ‘aaabbaaa’
and y2y1y3 = ‘aaabbaaa’
We can see that
x2x1x3 = y2y1y3
Hence, the solution is i = 2, j = 1, and k = 3.
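A hedged Python sketch (names mine) that checks a proposed index sequence against the two lists; it confirms the solution (2, 1, 3) found above:

```python
def pcp_check(xs, ys, seq):
    """Check whether the 1-based index sequence seq is a Post
    Correspondence solution for the lists xs (top) and ys (bottom)."""
    top = "".join(xs[i - 1] for i in seq)
    bottom = "".join(ys[i - 1] for i in seq)
    return top == bottom

# The lists from Example 1 (M on top, N on the bottom):
M = ["abb", "aa", "aaa"]
N = ["bba", "aaa", "aa"]
print(pcp_check(M, N, [2, 1, 3]))  # True: both sides read "aaabbaaa"
print(pcp_check(M, N, [1, 2, 3]))  # False
```

Note that such a checker only verifies a given sequence; PCP itself is undecidable, so no algorithm can decide in general whether any solution exists.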
Example 2
Find whether the lists M = (ab, bab, bbaaa) and N = (a, ba, bab) have a Post
Correspondence Solution?
Solution
        x1      x2      x3
M       ab      bab     bbaaa
N       a       ba      bab
Here every string in the list M is strictly longer than the corresponding string in N, so
for any sequence of indices the top string is always longer than the bottom string.
Hence the lists have no Post Correspondence Solution.
Although the tape has infinitely many cells, only some finite prefix of these will be
non-blank. We write these down as part of our state. To describe the state of the finite
control, we create new symbols, labelled q1 through qk, one for each of the finite state
machine’s k states. We insert the correct symbol into the string describing the tape’s
contents at the position of the tape head, thereby indicating both the tape head’s
position and the current state of the finite control. For the alphabet {0,1}, a typical state
might look something like:
101101110q700110.
A simple computation history would then look something like this:
q0101#1q401#11q21#1q810.
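A small Python sketch (names mine) of building such configuration strings and joining them into a history with ‘#’ separators:

```python
def config(left, state, right):
    """Encode a configuration: the tape contents with the state
    symbol inserted at the tape head's position."""
    return f"{left}q{state}{right}"

def history(configs):
    """Join successive configurations with '#' separators."""
    return "#".join(configs)

print(config("1011011", 7, "00110"))  # '1011011q700110'
print(history([config("", 0, "101"), config("1", 4, "01")]))
# 'q0101#1q401' -- the first two configurations above
```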
We start out with this block, where x is the input string and q0 is the start state:
q0x#
The top starts out “lagging” the bottom by one state, and keeps this lag until the
very end stage. Next, for each symbol a in the tape alphabet, as well as #, we have a
“copy” block, which copies it unmodified from one state to the next:
a
a
We also have a block for each position transition the machine can make,
showing how the tape head moves, how the finite state changes, and what happens to
the surrounding symbols. For example, here the tape head is over a 0 in state 4, and
then writes a 1 and moves right, changing to state 7:
q40
1q7
Finally, when the top reaches an accepting state, the bottom needs a chance to
finally catch up to complete the match. To allow this, we extend the computation so that
once an accepting state is reached, each subsequent machine step will cause a symbol
near the tape head to vanish, one at a time, until none remain. If qf is an accepting
state, we can represent this with the following transition blocks, where a is a tape
alphabet symbol:
qfa     aqf
qf      qf
There are a number of details to work out, such as dealing with boundaries between
states, making sure that our initial tile goes first in the match, and so on, but this shows
the general idea of how a static tile puzzle can simulate a Turing machine computation.
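As a hedged sketch (Python; function names mine), the copy tiles and the right-move transition tile described above could be generated like this:

```python
def copy_tiles(tape_alphabet):
    """One (a, a) domino per tape symbol, plus the separator '#',
    so unchanged symbols are copied to the next configuration."""
    return [(a, a) for a in list(tape_alphabet) + ["#"]]

def right_move_tile(state, read, write, new_state):
    """Domino for delta(state, read) = (new_state, write, R):
    top 'q read' becomes bottom 'write q-new'."""
    return (f"q{state}{read}", f"{write}q{new_state}")

print(copy_tiles("01"))
# [('0', '0'), ('1', '1'), ('#', '#')]
print(right_move_tile(4, "0", "1", 7))
# ('q40', '1q7') -- the transition tile shown in the text
```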
5.5 NP PROBLEMS
In computational complexity theory, NP is one of the most fundamental
complexity classes. The abbreviation NP refers to “nondeterministic polynomial time.”
Intuitively, NP is the set of all decision problems for which the instances where the
answer is “yes” have efficiently verifiable proofs of the fact that the answer is indeed
“yes”. More precisely, these proofs have to be verifiable in polynomial time by a
deterministic Turing machine. In an equivalent formal definition, NP is the set of
decision problems where the “yes”-instances can be accepted in polynomial time by a
non-deterministic Turing machine. The equivalence of the two definitions follows from
the fact that an algorithm on such a non-deterministic machine consists of two phases,
the first of which consists of a guess about the solution, which is generated in a non-
deterministic way, while the second consists of a deterministic algorithm that verifies or
rejects the guess as a valid solution to the problem. The complexity class P is contained
in NP, but NP contains many important problems, the hardest of which are called
NP-complete problems: problems to which every other problem in NP can be reduced in
polynomial time. The most important open question in complexity theory, the P versus NP
problem, asks whether polynomial-time algorithms actually exist for NP-complete
problems and, by corollary, for all NP problems. It is widely believed that this is not the
case.
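To make the “efficiently verifiable proof” idea concrete, here is a hedged Python sketch using SUBSET-SUM, a standard NP problem not drawn from the text above: the certificate is a set of indices, and verification is a single polynomial-time pass:

```python
def verify_subset_sum(numbers, target, certificate):
    """Polynomial-time verifier: certificate is a list of indices
    claimed to pick out a subset of numbers summing to target."""
    if len(set(certificate)) != len(certificate):
        return False                      # indices must be distinct
    if any(i < 0 or i >= len(numbers) for i in certificate):
        return False                      # indices must be in range
    return sum(numbers[i] for i in certificate) == target

nums = [3, 34, 4, 12, 5, 2]
print(verify_subset_sum(nums, 9, [2, 4]))  # True: 4 + 5 == 9
print(verify_subset_sum(nums, 9, [0, 1]))  # False: 3 + 34 != 9
```

Finding a valid certificate may take exponential time, but checking one is fast; that asymmetry is exactly the definition of NP.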
5.5.1 Formal definition
The complexity class NP can be defined in terms of NTIME as follows:
NP = ∪_{k ∈ ℕ} NTIME(n^k)
where NTIME(n^k) is the class of decision problems solvable by a nondeterministic
Turing machine in time O(n^k).
The model consists of an input-output relation that the machine computes. The
input is given in binary form on the machine's tape, and the output consists of the
contents of the tape when the machine halts. What determines how the contents of the
tape change is a finite state machine (or FSM, also called a finite automaton) inside the
Turing Machine. The FSM is determined by the number of states it has, and the
transitions between them.
At every step, the current state and the character read on the tape determine the
next state the FSM will be in, the character that the machine will output on the tape
(possibly the one read, leaving the contents unchanged), and which direction the head
moves in, left or right.
The problem with Turing Machines is that a different one must be constructed for
every new computation to be performed, that is, for every input-output relation.
This is why we introduce the notion of a universal Turing machine (UTM), which, along
with the input on the tape, takes in the description of a machine M. The UTM can then
go on to simulate M on the rest of the contents of the input tape. A universal Turing
machine can thus simulate any other machine.
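A minimal Python sketch of this idea (all names are my own): one simulate routine that takes a machine description as data, so no new program is needed per machine. The toy description below flips every bit of its input:

```python
def simulate(delta, start, accept, tape, blank="^"):
    """Simulate a one-tape TM given as data.

    delta maps (state, symbol) -> (new_state, write, move), with
    move in {-1, +1}; simulation stops in the accept state (or
    when no rule applies)."""
    tape = dict(enumerate(tape))
    state, head = start, 0
    while state != accept and (state, tape.get(head, blank)) in delta:
        state, write, move = delta[(state, tape.get(head, blank))]
        tape[head] = write
        head += move
    cells = [tape[i] for i in sorted(tape)]
    return state, "".join(cells).strip(blank)

# Toy machine description: flip every bit, accept at the first blank.
delta = {
    ("s", "0"): ("s", "1", +1),
    ("s", "1"): ("s", "0", +1),
    ("s", "^"): ("acc", "^", +1),
}
print(simulate(delta, "s", "acc", "0110"))  # ('acc', '1001')
```

A real UTM would read the description of M from its own tape in an encoded form (as in the 0^i 1 0^j … encoding below); passing delta as a Python value plays the same role here.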
begins at the position on tape 2 scanned by U. This transition, encoded as
0^i 1 0^j 1 0^k 1 0^l 1 0^m, is the one M would next make. We should:
(a) Change the contents of tape 3 to 0^k (the next state). To do so, U first erases all the 0’s
Organization of the Universal Turing Machine U as a Multitape TM.
QUESTION BANK
PART A
1. What is post correspondence problem?
2. What is Polynomial Time?
3. Discuss the Class P. [May/June 2016]
4. Discuss the Class NP.
5. What is the time complexity of the Class NP on an NDTM?
6. Discuss the definition of the Class NP-Completeness. [May 2017]
7. What is P Versus NP Problem?
8. Write the definition of the Class NP-Completeness.
9. What is NP-Hard and NP-Complete?
10. Write the properties of NP-Hard and NP-Complete.[Nov 2016]
11. What is NP- Complete? Write its properties.
12. What is Undecidable theory?
13. Differentiate between recursive and recursively enumerable
languages.[May/June 2016, Nov/Dec 2017]
14. What do you mean by Universal Turing Machine?[Apr/May 2018]
PART – B & C