
Unit-III

Context free Grammar and Languages:

Context free grammar:

Introduction to Grammar:

Grammar is a set of rules used to define the language.

Types of grammar:

1. Type 0 – Unrestricted (Recursively Enumerable) Grammar
2. Type 1 – Context Sensitive Grammar
3. Type 2 – Context Free Grammar
4. Type 3 – Regular Grammar

(The four types form the Chomsky hierarchy: every Type 3 grammar is also Type 2, every Type 2 is also Type 1, and every Type 1 is also Type 0.)

Type 0 Grammar:

 A phrase structure grammar with no restriction on the productions is called a type
0 grammar.
 A language generated by a type 0 grammar is called a “type 0” language.
 The productions in a type 0 grammar are of the form α→β, in which there is no
restriction on the lengths of α and β.
 Here α and β can contain any number of terminals and non terminals.
 For example,
o Aac → bBDE
o aBD → abcDE

Type 1 Grammar:

This grammar carries the restriction that in every production α→β, the length
of β is greater than or equal to the length of α.

That is, α→β [|α| ≤ |β|]

 A language generated by a type 1 grammar is called a type 1 language, and the
grammar is called a “context sensitive grammar”.
 The productions in a type 1 grammar are of the form α→β, where α and β are
strings of terminals and non terminals and β is at least as long as α.

Example:

S → aAB

B→b

A → aBcb

AB → bB

Type 2 Grammar:

 This grammar contains productions of the form α→β, where the left side α is a
single non terminal.
 That is, every production has the form A→β, where A is a non terminal and β is a
string of terminals and non terminals. Such grammars are called “context free
grammars”, and the languages they generate are the context free languages.

Context free grammar for palindrome:

 A palindrome is a string that reads the same forward and backward.


 That is, if ‘w’ is a string, then it is a palindrome if and only if w = w^R.
 It is easy to verify that the language Lpal of palindromes of 0’s and 1’s is not a
regular language.
 To prove this, we can use the pumping lemma with the string w = 0^n 1 0^n.
 We assume that Lpal is regular, break the string as w = xyz, and arrive at a
contradiction, as shown for the pumping lemma.
 When the first and last symbols are removed from a palindrome, the
resulting string is also a palindrome.

Basis: ε, 0, 1 are palindromes.


Induction:

If w is a palindrome, then 0w0, 1w1 is also palindrome.

 The context free grammar productions for the palindrome is as follows:


P→ε
P→0
P→1
P → 0P0
P → 1P1
 The grammar Gpal for the palindromes is represented by Gpal = ({P},{0,1},A,P),
where A is the set of productions

P → 1P1 | 0P0 | 1 | 0 | ε.
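The basis/induction definition above can be checked directly. The following sketch (the function name is ours, not from the notes) tests membership in Lpal by mirroring the productions P → 0P0 | 1P1 | 0 | 1 | ε:

```python
# Membership test for the palindrome language of 0's and 1's, mirroring
# the grammar Gpal: P -> 0P0 | 1P1 | 0 | 1 | epsilon.
def in_Lpal(w: str) -> bool:
    # Basis: epsilon, "0" and "1" are palindromes.
    if len(w) <= 1:
        return w in ("", "0", "1")
    # Induction: w is derivable iff w = 0P0 or w = 1P1 for some derivable P.
    return w[0] in "01" and w[0] == w[-1] and in_Lpal(w[1:-1])

assert in_Lpal("01110") and not in_Lpal("011")
```

Each recursive call strips the matching outer pair, exactly as the inductive rule peels 0w0 or 1w1 back to w.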

A context free grammar (CFG) for simple expressions:

1. E → I
2. E → E + E
3. E → E * E
4. E → (E)
5. I → a
6. I → b
7. I → Ia
8. I → Ib
9. I → I0
10. I → I1

Here the grammar for expression is stated formally as G = ({E,I}, T, P, E), where T is the
set of symbols {+, *, (, ), a, b, 0, 1} and P is the set of productions, E is the starting NT
(Non terminal).

Problems for generating CFG:

Generate CFG for the following Language.

1. L = {0^n | n = 1}
S → 0A | 0
A→ε
(or)
S→A
A→0
2. L = {0^n | n ≥ 1}
S → 0A | 0
A → 0A | ε
3. L = {0*}
S → 0A | ε
A → 0A | ε
4. L = {0^n 1^n | n = 1}
S → 01
5. L = {0^n 1^n | n ≥ 1}
S → 0S1 | 01
6. L = {0^n 1 0^n | n = 1}
S → 010
7. L = {0^n 1 0^n | n ≥ 1}
S → 0S0 | 010
8. L = {a^i b^j c^k | i ≠ j or j ≠ k}
S → NC | AM
N → aNb | P | Q (N generates a^i b^j with i ≠ j)
P → aP | a
Q → Qb | b
C → cC | ε
A → aA | ε
M → bMc | R | U (M generates b^j c^k with j ≠ k)
R → bR | b
U → Uc | c
9. L = {a^i b^j | j > i}

S → aSb | B

B → bB | b

10. L = {ww^R | w ∈ {0,1}*}


S → 0S0 | 1S1 | ε.
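Grammars like these can be sanity-checked by brute force. The sketch below (a helper of ours, not part of the notes) enumerates every terminal string a CFG derives up to a length bound, and confirms the grammar of problem 5 for {0^n 1^n | n ≥ 1}:

```python
# Enumerate all terminal strings a CFG derives, up to max_len, by
# breadth-first leftmost rewriting of sentential forms.
def generate(productions, start, max_len):
    # productions: dict mapping a single-character non-terminal to bodies.
    seen, frontier, words = set(), {start}, set()
    while frontier:
        nxt = set()
        for form in frontier:
            nts = [i for i, c in enumerate(form) if c in productions]
            if not nts:                     # all terminals: a derived word
                words.add(form)
                continue
            i = nts[0]                      # expand the leftmost non-terminal
            for body in productions[form[i]]:
                new = form[:i] + body + form[i + 1:]
                # prune forms whose terminal content already exceeds the bound
                if sum(c not in productions for c in new) <= max_len and new not in seen:
                    seen.add(new)
                    nxt.add(new)
        frontier = nxt
    return words

# Problem 5 grammar: S -> 0S1 | 01
words = generate({"S": ["0S1", "01"]}, "S", 8)
assert words == {"01", "0011", "000111", "00001111"}
```

Running the same check against a candidate grammar that generates 0*1* instead would immediately expose strings such as "011" that are not of the form 0^n 1^n.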
Derivation using a grammar:

Strings can be shown to be in the language of a grammar either by recursive inference or by derivations; derivations are further classified into leftmost and rightmost derivations.

Recursive inference:

 We apply the productions of a CFG to infer that certain strings are in the language
of a certain variable.
 We take strings known to be in the language of each variable of the body,
concatenate them in the proper order with any terminals appearing in the body, and
infer that the resulting string is in the language of the variable in the head. This
procedure is called recursive inference.

Example:

Now let us see how the string a*(a+b00) is inferred using the CFG for simple expressions given above.

Step     String inferred     For language of     Production used     String(s) used
i)       a                   I                   5                   –
ii)      b                   I                   6                   –
iii)     b0                  I                   9                   ii)
iv)      b00                 I                   9                   iii)
v)       a                   E                   1                   i)
vi)      b00                 E                   1                   iv)
vii)     a+b00               E                   2                   v), vi)
viii)    (a+b00)             E                   4                   vii)
ix)      a*(a+b00)           E                   3                   v), viii)

Derivation:

Definition:

Derivation is a mechanism used to prove that a string is in the given


language represented by a grammar.

There are two types of derivation

 Top down derivation


 Bottom up derivation

In top down derivation (starting NT to string):

1. The derivation begins with the start symbol.
2. A non terminal N is replaced by the RHS of a production for which N is the LHS.
3. Step 2 is repeated until all the non terminals are replaced.

In bottom up derivation (string to starting NT):

1. The derivation begins with the string to be derived.
2. A part ‘r’ of the string is replaced with the LHS (non terminal) of a rule for which ‘r’ is
the RHS.
3. Step 2 is repeated until the start symbol of the grammar is reached.

Rules for algebraic expression:

A CFG is written for algebraic expression using the following rules:

1. Symbols ‘a’ and ‘b’ are identifiers.


2. Identifiers are algebraic expressions.
3. The binary operators + and * may be used to combine smaller algebraic
expressions and form larger algebraic expressions.
4. Parenthesis may be used to enclose algebraic expressions.
Example:
𝐸 → 𝑎
𝐸 → 𝑏      } Productions derived from identifiers.
𝐸 → 𝐸 + 𝐸
𝐸 → 𝐸 ∗ 𝐸  } Productions derived from algebraic expressions.
𝐸 → (𝐸)

The grammar may be formally defined as

G = ({E}, {a, b, +, *}, P, E), where P = {E → a | b | E+E | E*E | (E)}

Example:

Derive the string a*b+a using top down derivation from the grammar G for
algebraic expression.

Solution:

E ⇒ E * E

⇒ E * E + E

⇒ a * E + E

⇒ a * b + E

⇒ a * b + a

Top down derivation are classified into

 Leftmost derivation
 Rightmost derivation

Leftmost derivation:

Leftmost derivation is one in which the leftmost non terminal is replaced by RHS of a
production whose LHS is the Non terminal that is being replaced.

One or more leftmost derivation steps are written as ⇒lm (with ⇒*lm for zero or more steps).
Example:

String: a*b+a

E ⇒ E + E
⇒ E * E + E
⇒ a * E + E
⇒ a * b + E
⇒ a * b + a

Rightmost derivation:

One or more rightmost derivation steps are written as ⇒rm (with ⇒*rm for zero or more steps).
Rightmost derivation is the one in which the rightmost non-terminal is replaced by RHS
of a production whose LHS is the non terminal that is being replaced.

Example:

String: a*b+a

E ⇒ E + E
⇒ E + a
⇒ E * E + a
⇒ E * b + a
⇒ a * b + a
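The difference between the two strategies can be made concrete. The sketch below (names ours, not from the notes) performs a single derivation step on a sentential form, replacing either the leftmost or the rightmost non-terminal, and replays the leftmost derivation of a*b+a:

```python
# One derivation step: replace the leftmost (or rightmost) non-terminal
# in a sentential form by the body of a production.
def step(form, nonterminals, production, leftmost=True):
    A, body = production                       # e.g. ("E", "E*E")
    positions = [i for i, c in enumerate(form) if c in nonterminals]
    i = positions[0] if leftmost else positions[-1]
    assert form[i] == A, "chosen non-terminal must match the production head"
    return form[:i] + body + form[i + 1:]

# Leftmost derivation of a*b+a from E -> E+E | E*E | a | b
form = "E"
for p in [("E", "E+E"), ("E", "E*E"), ("E", "a"), ("E", "b"), ("E", "a")]:
    form = step(form, {"E"}, p, leftmost=True)
assert form == "a*b+a"
```

Passing leftmost=False with a suitably reordered production list would reproduce the rightmost derivation of the same string.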

Example for the leftmost and rightmost derivation:

1. Derive the string (a+(a*a)) using leftmost and rightmost derivation for the
following production E → E + E | E * E | (E) | a.
Solution:

Leftmost derivation:

E ⇒lm (E) ⇒lm (E+E) ⇒lm (a+E) ⇒lm (a+(E)) ⇒lm (a+(E*E)) ⇒lm (a+(a*E)) ⇒lm (a+(a*a))

Thus E ⇒*lm (a+(a*a)).

Rightmost derivation:

E ⇒rm (E) ⇒rm (E+E) ⇒rm (E+(E)) ⇒rm (E+(E*E)) ⇒rm (E+(E*a)) ⇒rm (E+(a*a)) ⇒rm (a+(a*a))

Thus E ⇒*rm (a+(a*a)).

2. Derive the string b*(b+a11) using leftmost and rightmost derivation for the
following production
E → I | E + E | E * E | (E)
I → a | b | Ia | Ib | I0 | I1
Solution:
Leftmost derivation:

E ⇒lm E*E
⇒lm I*E
⇒lm b*E
⇒lm b*(E)
⇒lm b*(E+E)
⇒lm b*(I+E)
⇒lm b*(b+E)
⇒lm b*(b+I)
⇒lm b*(b+I1)
⇒lm b*(b+I11)
⇒lm b*(b+a11)

Rightmost derivation:

E ⇒rm E*E
⇒rm E*(E)
⇒rm E*(E+E)
⇒rm E*(E+I)
⇒rm E*(E+I1)
⇒rm E*(E+I11)
⇒rm E*(E+a11)
⇒rm E*(I+a11)
⇒rm E*(b+a11)
⇒rm I*(b+a11)
⇒rm b*(b+a11)

3. The following grammar generates the language of the expression 0*1(0+1)*
S → A1B
A → 0A | ε
B → 0B | 1B | ε

Derive the following strings using leftmost and rightmost derivation.

i. 00101
ii. 1001
iii. 00011
4. Consider the CFG and derive the string aabbab using leftmost and rightmost
derivation. S → aSbS | bSaS | ε.

Example: (Recursive Inference)

Derive the string (a*a)+a using recursive inference for the following CFG: E → E + E |
E * E | (E) | a.
Solution:

1. E→E+E

2. E→E*E

3. E → (E)

4. E→a

Step     String inferred     For language of     Production used     String(s) used
i)       a                   E                   4                   –
ii)      a*a                 E                   2                   i), i)
iii)     (a*a)               E                   3                   ii)
iv)      (a*a)+a             E                   1                   iii), i)

2. Derive the string 00101 using recursive inference for the following CFG.

1. S → A1B
2. A → 0A
3. A→ε
4. B → 0B
5. B → 1B
6. B→ε

Step     String inferred     For language of     Production used     String(s) used
i)       0                   A                   2, 3                –
ii)      00                  A                   2                   i)
iii)     1                   B                   5, 6                –
iv)      01                  B                   4                   iii)
v)       00101               S                   1                   ii), iv)

CFG Language:

Let G = (V,T,P,S) be a context free grammar. The language of the grammar G is denoted
L(G); it is the set of terminal strings that can be derived from the start symbol:

L(G) = {w | w is in T* and S ⇒* w}
Sentential forms:

 The strings that are derived from the start symbol of the grammar are said to be
sentential forms.
 For example, if G = (V, T, P, S) is a CFG, then any string α in
(V∪T)* that can be derived from the start symbol S is a sentential form.

That is, if S ⇒* α, then α is a sentential form.
 A sentence is a sentential form with no variables.
 The types of sentential form are
o Left sentential form
o Right sentential form

Left sentential form:

If the string α can be generated from the start symbol using a leftmost

derivation, S ⇒*lm α, then α is a left sentential form.
Right sentential form

If the string α can be generated from the start symbol using a rightmost

derivation, S ⇒*rm α, then α is a right sentential form.
Parse tree:

Definition:

The strings that are derived from the context free grammar can be represented in a
tree format known as parse tree or derivation tree.

Uses of parse tree:

 A parse tree can be used for both derivation and recursive inference.
 A parse tree is used to find ambiguity in grammars and languages.
 Parse trees are used in compilers to facilitate the translation of the source
program into executable code.

Constructing parse tree:

Let us consider the grammar G = (V, T, P, S) and the parse tree for G is
constructed with the following conditions.

1. The root is labelled with the start symbol S.

2. Each interior node of the parse tree is labelled with a NT of the grammar G.
3. Every vertex of the parse tree is labelled with a NT, a T, or ε.
4. Each leaf is labelled with a terminal or ε.
5. If a leaf is ε, then it must be the only child of its parent.
6. If there is a production of the form A → X1X2...Xn, then an interior node can be
labelled A with its children labelled X1, X2, ..., Xn from left to right.
7. If there is a production of the form A → ε, then the vertex labelled A has ε
as its single child.
8. For example, for the productions S → A and A → X1X2...Xn | ε, the node S has the
single child A, and A has either the children X1, X2, ..., Xn (left to right) or the
single child ε.
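Conditions 1–7 can be captured by a small tree structure. In the sketch below (the representation is ours, not from the notes), interior nodes carry a non-terminal, leaves carry a terminal or ε, and the yield concatenates the leaves from left to right:

```python
# Minimal parse-tree node: an empty child list marks a leaf, and the
# special label "ε" contributes nothing to the yield.
class Node:
    def __init__(self, label, children=None):
        self.label = label
        self.children = children or []

    def yield_(self):
        if not self.children:                      # leaf: terminal or epsilon
            return "" if self.label == "ε" else self.label
        return "".join(c.yield_() for c in self.children)

# Tree for the derivation P => 0P0 => 010 in the palindrome grammar.
t = Node("P", [Node("0"), Node("P", [Node("1")]), Node("0")])
assert t.yield_() == "010"
```

The yield of the whole tree is exactly the string the corresponding derivation produces, which is why derivations, recursive inference and parse trees all define the same language.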

Problem for parse tree:

1. Construct the parse tree for the string a*(a+b00) for the following CFG using both
leftmost and rightmost derivation.
E→I
E→E+E
E→E*E
E → (E)
I→a
I→b
I → Ia
I → Ib
I → I0
I → I1

Solution:

Input string: a*(a+b00)


Leftmost derivation Rightmost derivation

Example 2:

Construct the parse tree for the string a*(a+a) for the following CFG using both leftmost
and rightmost derivation. The CFG is E → E + E | E * E | (E) | a.

Solution:

Leftmost derivation Rightmost derivation


Example 3:

Construct the parse tree for the string 00101 for the following CFG using both leftmost
and rightmost derivation. The CFG is

S → A1B

A → 0A | ε

B → 0B | 1B | ε

Solution:

Leftmost derivation Rightmost derivation

Ambiguity in Grammars and Languages:

Definition:

A grammar G = (V, T, P, S) is said to be an ambiguous grammar, if there is some


string that it can generate in more than one way. That is, the string has more than one
parse tree or more than one leftmost derivation or more than one rightmost derivation.

A language is inherently ambiguous if it can only be generated by ambiguous


grammars.

Example:

The context free grammar A → A+A | A-A | a is ambiguous since there are two leftmost
derivations for the string a+a+a.

Leftmost derivation-1: A ⇒ A+A ⇒ a+A ⇒ a+A+A ⇒ a+a+A ⇒ a+a+a
Leftmost derivation-2: A ⇒ A+A ⇒ A+A+A ⇒ a+A+A ⇒ a+a+A ⇒ a+a+a
Example 2:

The grammar is ambiguous since there are two parse trees for the string a+a-a.

(Two distinct parse trees for a+a-a: one grouping the string as (a+a)-a, the other as a+(a-a).)

Example:

1. Check whether the following grammar is ambiguous.


E →E + E | E * E | (E) | a.
2. Consider the grammar S → aS | aSbS | ε. This grammar is ambiguous: the string
‘aab’ has two parse trees, two leftmost derivations and two rightmost
derivations.
Leftmost derivation-1          Leftmost derivation-2
S ⇒ aS                         S ⇒ aSbS
⇒ aaSbS                        ⇒ aaSbS
⇒ aabS                         ⇒ aabS
⇒ aab                          ⇒ aab

Parse tree-1 (leftmost derivation-1): the root S has children a, S; that S has
children a, S, b, S, where both inner S’s derive ε.
Parse tree-2 (leftmost derivation-2): the root S has children a, S, b, S; the first
child S has children a, S with that S deriving ε, and the last S derives ε.
Rightmost derivation-1         Rightmost derivation-2
S ⇒ aS                         S ⇒ aSbS
⇒ aaSbS                        ⇒ aSb
⇒ aaSb                         ⇒ aaSb
⇒ aab                          ⇒ aab

(The corresponding parse trees are the same two trees shown for the leftmost derivations.)
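The claim that ‘aab’ has exactly two parse trees can be verified mechanically. The function below (our addition, not from the notes) counts parse trees rooted at S for the grammar S → aS | aSbS | ε by trying every production and every split point:

```python
from functools import lru_cache

# Count the parse trees whose root S yields w, for S -> aS | aSbS | epsilon.
@lru_cache(maxsize=None)
def trees(w: str) -> int:
    n = 1 if w == "" else 0                        # S -> epsilon
    if w.startswith("a"):
        n += trees(w[1:])                          # S -> aS
        for j in range(1, len(w)):                 # S -> aSbS: split at each b
            if w[j] == "b":
                n += trees(w[1:j]) * trees(w[j + 1:])
    return n

assert trees("aab") == 2   # exactly the two parse trees shown above
```

A count greater than one for any string is a direct witness of ambiguity.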

Removing ambiguity from grammar: (Conversion from ambiguous to unambiguous);

 A grammar G = (V, T, P, S) is unambiguous if each string ‘w’ has at most

one parse tree, and hence at most one leftmost derivation and at most one rightmost derivation.
 There are two causes for the ambiguous grammar and they are,
o The precedence of operators is not considered.
o A sequence of identical operators can group either from left or from the
right.
 The ambiguous grammar can be made as unambiguous grammar by introducing
several different variables for enforcing precedence.
 These different variables represent expressions that share a level of binding
strength.
 A factor is an expression that cannot be broken apart by any adjacent
operator, either * or +. The factors of our expression language are
o Identifiers (i.e., it is not possible to separate the letters of an
identifier by attaching an operator).
o Parenthesized expressions.
 A term is an expression that cannot be broken apart by the ‘+’ operator. A
term is a product of one or more factors.
o Example: 1. In a1+a*b, the term a*b cannot be broken by the adjacent +.
2. In a1*a*b, the term a*b can be broken by the adjacent *.
 An expression refers to any possible expression, including those that can
be broken by either an adjacent * or an adjacent +.
 Expression = sum of one or more terms.
 Term = product of one or more factors.
 Factor = identifier (or) parenthesized expression.
 Writing E for expressions, T for terms and F for factors gives the
unambiguous grammar E → E + T | T, T → T * F | F, F → (E) | a | b.
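The factor/term/expression layering is exactly what a recursive-descent recognizer implements: expr parses a sum of terms and term a product of factors, so * binds more tightly than + and each string gets a single parse. A sketch (ours, assuming identifiers are just a and b as in the grammar above):

```python
# Recursive-descent recognizer for the layered grammar
#   expr -> term (+ term)*,  term -> factor (* factor)*,
#   factor -> a | b | ( expr )
def recognize(s: str) -> bool:
    pos = 0

    def factor():
        nonlocal pos
        if pos < len(s) and s[pos] in "ab":
            pos += 1
            return True
        if pos < len(s) and s[pos] == "(":
            pos += 1
            if expr() and pos < len(s) and s[pos] == ")":
                pos += 1
                return True
        return False

    def term():                        # product of one or more factors
        nonlocal pos
        if not factor():
            return False
        while pos < len(s) and s[pos] == "*":
            pos += 1
            if not factor():
                return False
        return True

    def expr():                        # sum of one or more terms
        nonlocal pos
        if not term():
            return False
        while pos < len(s) and s[pos] == "+":
            pos += 1
            if not term():
                return False
        return True

    return expr() and pos == len(s)

assert recognize("a*b+a") and recognize("(a+(a*a))") and not recognize("a+")
```

Because each level of the parser commits to one grouping, the recognizer never faces the two-parse-tree choice that the flat grammar E → E+E | E*E allows.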
Inherent Ambiguity:
 A CFL L is said to be inherently ambiguous if all its grammars are ambiguous.
 If even one grammar for L is unambiguous, then L is an unambiguous language.
 Usually, even when a grammar is ambiguous, there is another grammar for the
same language that is unambiguous.
 Inherently ambiguous languages are those for which unambiguous grammars do
not exist.
 That is, if there exists a language L for which there is no unambiguous grammar,
then L is called an inherently ambiguous language.

Example:

L = {a^n b^n c^m d^m | n ≥ 1, m ≥ 1} ∪ {a^n b^m c^m d^n | n ≥ 1, m ≥ 1}. That is, L consists of

strings in a+b+c+d+ such that either,

1. There are as many a’s as b’s and as many c’s as d’s, or

2. There are as many a’s as d’s and as many b’s as c’s.
 L is a context free language.
 A grammar for an inherently ambiguous language
S →AB | C
A → aAb | ab
B → cBd | cd
C → aCd | aDd
D → bDc | bc.
 This grammar is ambiguous.
 For example, the string aabbccdd has the two leftmost derivations:
o S ⇒ AB ⇒ aAbB ⇒ aabbB ⇒ aabbcBd ⇒ aabbccdd
o S ⇒ C ⇒ aCd ⇒ aaDdd ⇒ aabDcdd ⇒ aabbccdd
(Parse tree-1 follows the derivation through S → AB; parse tree-2 follows the derivation through S → C. The same string therefore has two distinct parse trees.)

Push down Automata:

Introduction:

1. The regular languages have an equivalent automaton: the finite automaton.

2. The automaton equivalent to the context free languages is the push down automaton.
3. Finite automata cannot recognize all context free languages, because some context
free languages are not regular.
4. Finite automata have strictly finite memories, whereas recognition of a context free
language may require storing an unbounded amount of information.

Examples:

1. L = {a^n b^n | n ≥ 0}. To handle this language, we must not only check that all a’s
precede the first b; we must also count the number of a’s. Since n is unbounded,
this counting cannot be done with a finite memory. So we require a PDA (Push
Down Automaton), a machine that can count without limit.
2. L = {ww^R | w ∈ {a,b}*}. To handle this language, we need more than unlimited
counting ability. We need to store and match a sequence of symbols in reverse
order.
3. A push down automaton is a machine similar to a finite automaton that accepts
the context free languages.
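The counting idea in example 1 can be sketched with an explicit stack (our example, not from the notes): push a symbol for every ‘a’, pop one for every ‘b’, and accept only if nothing mismatches and the stack ends empty:

```python
# Stack-based recognizer for {a^n b^n | n >= 0}: the stack height is the
# unbounded counter a finite automaton lacks.
def accepts_anbn(w: str) -> bool:
    stack = []
    seen_b = False
    for c in w:
        if c == "a":
            if seen_b:                 # an 'a' after a 'b' can never be fixed
                return False
            stack.append("a")
        elif c == "b":
            seen_b = True
            if not stack:              # more b's than a's so far
                return False
            stack.pop()
        else:
            return False
    return not stack                   # accept iff the counts balanced

assert accepts_anbn("aabb") and not accepts_anbn("aab")
```

Example 2 needs the same stack but used differently: symbols are pushed during the first half and matched against the input in reverse order during the second half.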

Definition of Pushdown automata:


 The pushdown automaton has an input tape, a finite control and a stack.
 The input tape is divided into many cells.
 Each cell holds only one input symbol; thus a given input string is placed on the
tape.
 The finite control has a pointer which points to the current symbol that is to be
read.
 At the end of the input a $ or Δ or ƀ (blank) symbol is placed, which indicates the end of
the input.

(Figure: an input tape holding a a b b $ followed by blanks, read by the finite control, which also manipulates the stack.)
Formal definition of a push down automata;

A push down automaton M is defined by P = (Q, ∑, Γ, δ, q0, Z0, F).

Where,

 Q is a finite set of states.

 ∑ is a finite alphabet called the input alphabet, the alphabet of tape
symbols.
 Γ is a finite alphabet called the stack alphabet, the alphabet of stack symbols.
 q0 ∈ Q is the start state or initial state.
 Z0 ∈ Γ is a particular stack symbol called the start symbol of the stack.
 F ⊆ Q is the set of final or favourable states.
 δ is the transition function: it maps Q × (∑ ∪ {ε}) × Γ to finite subsets of Q × Γ*.

Comparison of NFA and PDA:

NFA:

 (p,a,q) ϵΔ means if machine M is in state p, then on reading ‘a’ from input tape
go to state q.
 (p,ε,q) ϵ Δ means if machine M is in state p, it goes to state q without consuming
input.
PDA:

1. ((p,a,β), (q,γ)) → if machine M is in state p, the symbol read from the input tape is ‘a’
and β is on top of the stack, it goes to state q and replaces β by γ on top of the stack.
2. ((s,a,ε), (s,a)) → if machine M is in state s, it reads ‘a’, remains in state s and pushes ‘a’
onto the stack (ε means nothing is popped).
3. ((s,c,ε), (f,ε)) → if it reads ‘c’ in state s, it goes to the final state f and pushes
nothing onto the stack.
4. ((s,ε,ε), (f,ε)) → if in state s, go to state f.
5. ((f,a,a), (f,ε)) → if it reads ‘a’ in state f with ‘a’ on top of the stack, it remains in state f
and pops ‘a’ from the stack.
6. PDA’s are non-deterministic.

Transitions:

1. Let ((p, a, β), (q, γ)) ϵ δ.


2. It means that we
a. Read ‘a’ from the tape.
b. Pop the string β from the stack.
c. Move from state p to state q.
d. Pushing γ onto stack.
3. We draw this as an arc from state p to state q labelled a, β/γ.

Pushing and popping:

1. When we push γ, we push the symbols of γ as we read them right to left, so the
leftmost symbol of γ ends up on top.
2. When we pop β, we pop the symbols of β as we read them left to right.

Acceptance by the PDA:

PDA has two ways to accept the strings.

1. By moving to the final state at the end of the string.


2. By emptying the stack at the end of the string.

Types of PDA:

1. Deterministic PDA (DPDA)


2. Non deterministic PDA (NPDA)
A graphical notation of PDA: (Transition diagram)

 The nodes correspond to the states of the PDA.

 An arrow labelled Start indicates the start state, and double-circled states are
accepting, as for finite automata.
 An arc labelled ‘a, X/α’ from state q to state p means that δ(q,a,X) contains the
pair (p,α), that is the arc label tells what input is used and also gives the old
and new top of the stack.

Example:

The PDA P = ({q0,q1,q2}, {0,1}, {0,1,Z0}, δ, q0, Z0, {q2}), which accepts
{ww^R | w ∈ {0,1}*}, is represented by the following diagram.

(Transition diagram:
 q0 is the start state, with self-loops labelled 0, Z0/0Z0; 1, Z0/1Z0; 0, 0/00;
0, 1/01; 1, 0/10; 1, 1/11 — every input symbol is pushed.
 Arcs from q0 to q1 labelled ε, Z0/Z0; ε, 0/0; ε, 1/1 — the midpoint is guessed
without changing the stack.
 q1 has self-loops labelled 0, 0/ε and 1, 1/ε — each input symbol is matched
against the top of the stack and popped.
 An arc from q1 to the accepting state q2 labelled ε, Z0/Z0.)
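This PDA pushes every input symbol in q0, guesses the midpoint with an ε-move to q1, matches and pops in q1, and accepts in q2 on seeing Z0, so it accepts {ww^R | w ∈ {0,1}*}. A sketch of that nondeterministic behaviour (the configuration encoding is ours):

```python
# Nondeterministic simulation: configurations are (state, input position,
# stack as a string whose LAST character is the top; "Z" plays Z0).
def accepts_wwr(w: str) -> bool:
    frontier, seen = {("q0", 0, "Z")}, set()
    while frontier:
        cfg = frontier.pop()
        if cfg in seen:
            continue
        seen.add(cfg)
        state, i, stack = cfg
        if state == "q0":
            frontier.add(("q1", i, stack))                 # guess the midpoint
            if i < len(w):
                frontier.add(("q0", i + 1, stack + w[i]))  # push and advance
        elif state == "q1":
            if stack == "Z" and i == len(w):
                return True                                # epsilon, Z0/Z0 -> q2
            if i < len(w) and stack and stack[-1] == w[i]:
                frontier.add(("q1", i + 1, stack[:-1]))    # match and pop
    return False

assert accepts_wwr("0110") and not accepts_wwr("01")
```

Every possible midpoint guess is explored, which is exactly what the PDA's ε-moves from q0 to q1 provide.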

Instantaneous description (ID) of PDA:

 The execution status of the push down automata is represented by the


Instantaneous description (ID) of PDA.
 The instantaneous description records the state, stack contents and the input
symbols.
 The ID is defined as a 3-tuple (q, w, γ), where,
o q → state of the PDA
o w → remaining input
o γ → stack contents
 If a PDA P = (Q, ∑, Γ, δ, q0, Z0, F) has the transition δ(q, a, x) = (p, α), then for all
strings ‘w’ in ∑* and β in Γ*,
 (q, aw, xβ) ├ (p, w, αβ)
 This means that by reading the input symbol ‘a’ at state q with x as the top stack
symbol, the PDA replaces x by α and reaches the state ‘p’.

Moves of PDA:

The moves of the push down automata from current state to next state can be
represented by the symbol ├.

There are two types of moves.

 Move by consuming an input symbol: (q, aw, xz) ├ (p, w, αz).

 Move by ε-transition: (q, aw, xz) ├ (p, aw, αz).

Problems on transition diagram of PDA:

1. Suppose the PDA P = ({q, p}, {0,1}, {z0, x}, δ, q, z0, {p}) has the following
transition functions:
a. δ(q, 0, z0) = {(q, xz0)}
b. δ(q, 0, x) = {(q, xx)}
c. δ(q, 1, x) = {(q, x)}
d. δ(q, ε, x) = {(p, ε)}
e. δ(p, ε, x) = {(p, ε)}
f. δ(p, 1, x) = {(p, xx)}
g. δ(p, 1, z0) = {(p, ε)}

Starting from the initial ID (q, w, z0), show all the reachable instantaneous description
where the input string ‘w’ is as follows:

1. 01
2. 0011
3. 010

Solution:

1. String w = 01
ID = (q, w, z0)
(q, 01, z0) ├ (q, 1, xz0)
├ (q, ε, xz0)
├ (p, ε, z0)
The final state p is reached, so the string w = 01 is accepted.
2. w = 0011
ID = (q, w, z0)
(q, 0011, z0) ├ (q, 011, xz0)
├ (q, 11, xxz0)
├ (q, 1, xxz0)
├ (q, ε, xxz0)
├ (p, ε, xz0)
├ (p, ε, z0)
So the string 0011 is accepted.
3. w = 010
(q, 010, z0) ├ (q, 10, xz0)
├ (q, 0, xz0)
├ (q, ε, xxz0)
├ (p, ε, xz0)
├ (p, ε, z0)
So the string 010 is accepted.
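These instantaneous descriptions can be generated mechanically. The sketch below (encoding ours: the stack is a string with its top at the FRONT, and None marks an ε-move) explores all reachable IDs of this PDA and accepts by final state p:

```python
# Transition table of the PDA above; each entry maps
# (state, input-symbol-or-None, stack top) to (new state, pushed string).
DELTA = {
    ("q", "0", "z"): [("q", "xz")],
    ("q", "0", "x"): [("q", "xx")],
    ("q", "1", "x"): [("q", "x")],
    ("q", None, "x"): [("p", "")],
    ("p", None, "x"): [("p", "")],
    ("p", "1", "x"): [("p", "xx")],
    ("p", "1", "z"): [("p", "")],
}

def accepts(w: str) -> bool:
    frontier, seen = {("q", 0, "z")}, set()
    while frontier:
        cfg = frontier.pop()
        if cfg in seen:
            continue
        seen.add(cfg)
        state, i, stack = cfg
        if i == len(w) and state == "p":          # accept by final state p
            return True
        if not stack:
            continue
        top, rest = stack[0], stack[1:]
        for a in (w[i] if i < len(w) else None, None):
            for nstate, push in DELTA.get((state, a, top), []):
                ni = i + 1 if a is not None else i
                frontier.add((nstate, ni, push + rest))
    return False

assert accepts("01") and accepts("0011") and accepts("010")
```

Printing each configuration as it is popped would reproduce the ID sequences written out by hand above.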
2. Consider the following PDA and find whether the string 000111 and 00011 is
accepted.

(Transition diagram: start state p with self-loops labelled 0, z0/0z0 and 0, 0/00; an arc from p to q labelled 1, 0/ε; a self-loop on q labelled 1, 0/ε; arcs from p to r and from q to r labelled ε, z0/ε, where r is the accepting state.)
Solution:

Transition functions:

δ(p, 0, z0) = (p, 0z0)

δ(p, 0, 0) = (p, 00)

δ(p, 1, 0) = (q, ε)

δ(p, ε, z0) = (r, ε)

δ(q, 1, 0) = (q, ε)

δ(q, ε, z0) = (r, ε)


String w = 000111

(p, 000111, z0) ├(p, 00111, 0z0)

├(p, 0111, 00z0)

├(p,111, 000z0)

├(q, 11, 00z0)

├(q, 1, 0z0)

├(q, , z0)

├ (r,)

So the string 000111 is accepted.

String w = 00011

(p, 00011, z0) ├ (p, 0011, 0z0)

├ (p, 011, 00z0)

├ (p, 11, 000z0)

├ (q, 1, 00z0)

├ (q, ε, 0z0)

There is no transition δ(q, ε, 0), so the string is not accepted.

The language of a PDA:

 The language of a PDA is the set of all strings accepted by the PDA.
 The two methods of accepting a string in a PDA are as follows:
o Acceptance by final state.
o Acceptance by empty stack.

Acceptance by final state:

 If, after consuming the input, the PDA enters a final state, the string is accepted.
This method is accepting the language by reaching a final state.
 Let P = (Q, ∑, Γ, δ, q0, Z0, F) be a PDA. The language L(P) accepted
by final state is L(P) = {w | (q0, w, Z0) ├* (q, ε, α)}
o Where
q → a final state (q ∈ F)
α → any stack string (α ∈ Γ*)

Acceptance by empty stack:

 If, after reading the input, the stack is empty, then the string is accepted
by the PDA.
 Let P = (Q, ∑, Γ, δ, q0, Z0, F) be a PDA. The language N(P) accepted by empty
stack is N(P) = {w | (q0, w, Z0) ├* (p, ε, ε)}
o Where
p → any state in Q

Converting a language accepted by empty stack to final state:

Theorem:

If L = N(PN) for some PDA PN = (Q, ∑, Γ, δN, q0, Z0), then there is a PDA PF such
that L = L(PF).

To prove: (PN to PF)

If there exists a PDA PN that accepts a language by empty stack, then there exists a
PDA PF that accepts the same language by reaching a final state.

Proof:

 To prove this theorem, we use a new symbol x0 which must not be a symbol in Γ
(x0 ∉ Γ).
 We are going to simulate PN within PF; that is, we are going to construct PF from PN.
 Here x0 is used as the starting top symbol of the stack of PF.
 x0 marks the bottom of the stack for PN.
 PN goes on processing the input as long as it sees any symbol on the stack other than
the symbol x0.
 If PF sees x0, then PN has finished processing the string (it has emptied its stack).
 Now we construct PF with a new starting state p0 and a new final state pf.
 p0, the starting state of PF, pushes Z0 on top of x0 and
enters the initial state q0 of PN.
 PF simulates PN and accepts if PN empties its stack.

(Diagram: a new start state p0 with an arc labelled ε, x0/Z0x0 into the start state q0 of PN; from every state of PN there is an arc labelled ε, x0/ε into the new final state pf.)

 The function of the new starting state p0 is to push the symbol Z0 onto the stack
on top of x0 and enter the state q0.
 The state q0 is the initial state of PN, so PF simulates PN until PN empties its stack.
PF detects that PN has emptied its stack when PF sees x0 on top of the stack. So
when the PDA PF sees x0 on top of the stack, it reaches the new final state pf.
 So the PDA PF is given by
PF = (Q ∪ {p0, pf}, ∑, Γ ∪ {x0}, δF, p0, x0, {pf})
Where δF is defined as
1. δF(p0, ε, x0) = {(q0, Z0x0)} — PF pushes Z0 on top of x0.
2. For all states q in Q, input ‘a’ in ∑ or a = ε, and any stack symbol ‘y’ in Γ, δF(q, a, y)
contains all the pairs in δN(q, a, y).
3. δF(q, ε, x0) = {(pf, ε)} for every q in Q — PF accepts the string by reaching the final state.
So if ‘w’ is in N(PN), then by
combining all the moves, the instantaneous descriptions of PF while simulating PN are
given by
(p0, w, x0) ├ (q0, w, Z0x0) ├* (q, ε, x0) ├ (pf, ε, ε)
Thus the PDA PF accepts ‘w’ by final state.
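The construction can be sketched as a transformation on transition tables (encoding ours: δ maps (state, input-or-None, stack-top) to a list of (new-state, pushed-string) pairs, with the stack top at the front of the string, and "p0"/"pf"/"X0" as the new names):

```python
# Build delta_F of P_F from delta_N of P_N, per steps 1-3 above.
def empty_stack_to_final_state(delta_n, q0, z0, states):
    # Step 1: new start state "p0" pushes Z0 on top of the new marker X0.
    delta_f = {("p0", None, "X0"): [(q0, z0 + "X0")]}
    # Step 2: keep every move of P_N unchanged.
    for key, moves in delta_n.items():
        delta_f.setdefault(key, []).extend(list(moves))
    # Step 3: seeing X0 in any state of P_N means P_N emptied its stack.
    for q in states:
        delta_f[(q, None, "X0")] = [("pf", "")]
    return delta_f

# A one-state P_N accepting "0" by empty stack: delta(q0, 0, Z) = (q0, eps).
delta_n = {("q0", "0", "Z"): [("q0", "")]}
delta_f = empty_stack_to_final_state(delta_n, "q0", "Z", ["q0"])
assert delta_f[("p0", None, "X0")] == [("q0", "ZX0")]
assert delta_f[("q0", None, "X0")] == [("pf", "")]
```

The three dictionary updates correspond one-to-one with the three cases of δF in the proof.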

Converting a language accepted by reaching a final state to empty stack:

Theorem:
If L = L(PF) for some PDA PF = (Q, ∑, Γ, δF, q0, Z0, F), then there is a
PDA PN such that L = N(PN).

To prove:

We have to prove that for every PDA PF accepting L by final state, there exists a
PDA PN accepting L by empty stack.

Proof:

 To prove this theorem, here also we use a stack symbol x0 below the bottom of the
stack.
 Here we are going to construct PN from PF.
 q0 is the initial state of PF, and PF has a set of final states.
 The PDA PF enters a final state after accepting the string ‘w’, possibly leaving
symbols on the stack.
 So our target is to erase all the stack symbols of the PDA PF after it enters a final
state, in order for PN to accept by empty stack.
 Now construct PN with p0 as the new start state and a new state p that pops the stack.
 The transition diagram of PN is shown below:
