
Module 6. Applications of Automata

Power and Limitations of Regular and Context-Free Grammars and Machines

Model: Finite State Machines

Power:
- It performs lexical analysis
- It performs pattern matching
- It can accept valid strings and reject invalid strings
- One substring can be pumped any number of times: xy^i z

Limitations:
- It is designed only for regular expressions
- The language accepted is the Regular language only
- It supports the following grammars only:
  i) Right linear grammar: A → aB | a
  ii) Left linear grammar: A → Ba | a
- Reason: It does not support backtracking and it has no memory

Application Area:
- It can model software that decides whether or not online user input such as email addresses is valid
- Search engine optimization algorithms

Model: Push Down Automata

Power:
- It recognizes syntax
- It generates a parse tree for valid strings
- Two substrings can be pumped in tandem any number of times: u v^i x y^i z
- Memory is provided in the form of a stack (push/pop)

Limitations:
- It cannot generate strings
- It cannot perform computation
- It supports Context free grammar only: A → α, where α ∈ (T ∪ V)*
- Reason: A single stack can handle only one push/pop pair

Application Area:
- Palindrome checking, parenthesis checking

Model: Turing Machines

Power:
- It follows Unrestricted grammar: α → β
- It is used for recursive and recursively enumerable languages
- It can perform computations
- It manipulates a finite amount of data in a finite amount of time
- Enlarged storage space
- It makes use of infinite tape memory
- It can traverse in both left and right directions

Limitations:
- It can give a false lower bound on certain algorithms
- Reason: It does not model the strengths of a particular arrangement well

Application Area:
- Solve any algorithm and all computation operations. Ex: binary search, addition, subtraction, multiplication, logical AND, OR, log, etc.
Notes on Module VI: Applications

Automata Theory ITC405, Revised Syllabus
(2016)

Archana Kale∗

Department of Information Technology,
Thadomal Shahani Engineering College,
Bandra(west), Mumbai 400050

1 Introduction
These notes introduce conversion of grammars and automata to simple abstract programs. This is the first view of implementation of designs taught in Automata Theory. These are abstract codes with general syntactical assumptions. Section 2 shows code for regular languages. Section 3 shows code for context free languages. The last section, section 4, shows code for Turing Machines.


Also, thanks to the syllabus committee for this opportunity and to Prof. Darshan Ingle, Dept. of I.T., TSEC, for proofreading these notes.

2 Regular Language

Four examples of codes for regular languages are presented in this section. Section 2.1 shows conversion of a regular grammar to an accepter / verifier program. Section 2.2 shows conversion of a finite automaton to an accepter / verifier program. Section 2.3 shows conversion of a Moore machine to a translator program. Section 2.4 shows conversion of a Mealy machine to a binary adder.



Note: "input" is the input string with 'ch' as the current character. Function next() moves the current character position to the next character position. "endinput" marks the end of input.

∗ archana.kale@tsec.edu

Copyright: Archana Kale, TSEC

2.1 Regular Grammar
L1 ⇒ Language of strings over Σ = {a,b} having substring "ab".

Rules: S → bS / ab / abQ / aS
       Q → a / aQ / b / bQ

A grammar can be easily converted to functions by converting non-terminals (variables) to functions and rules to executable statements.

Application: Acceptor / Verifier Code:

acceptor(input)
{
    result = S(ch);
    if ( result == true ) print("Accepted");
    else print("Rejected");
}

Boolean S(ch)
{
    if ( ch == 'b' ) { next(); return( S(ch) ); }
    if ( ch == 'a' )
    {
        next();
        if ( ch == 'b' )
        {
            next();
            if ( endinput ) return( true );
            else return( Q(ch) );
        }
        else return( S(ch) );
    }
    else return( false );
}

Boolean Q(ch)
{
    if ( ch == 'a' )
    {
        next();
        if ( endinput ) return( true );
        else return( Q(ch) );
    }
    if ( ch == 'b' )
    {
        next();
        if ( endinput ) return( true );
        else return( Q(ch) );
    }
    else return( false );
}
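The grammar-to-functions scheme above can be mirrored in runnable Python. This is a sketch, not part of the notes: it replaces ch/next()/endinput with an explicit index i, and the function and variable names are illustrative.

```python
# Sketch of the grammar-to-functions idea: one function per non-terminal,
# one executable branch per rule. L1 = strings over {a,b} containing "ab".

def accepts(s: str) -> bool:
    """Acceptor / verifier: dispatch to the start symbol S."""
    return S(s, 0)

def S(s: str, i: int) -> bool:
    if i >= len(s):                    # input exhausted before seeing "ab"
        return False
    if s[i] == 'b':                    # rule S -> bS
        return S(s, i + 1)
    if s[i] == 'a':
        if i + 1 < len(s) and s[i + 1] == 'b':
            if i + 2 == len(s):        # rule S -> ab (ends the string)
                return True
            return Q(s, i + 2)         # rule S -> abQ
        return S(s, i + 1)             # rule S -> aS
    return False

def Q(s: str, i: int) -> bool:
    # Rules Q -> a / b / aQ / bQ generate any non-empty suffix over {a,b}.
    return i < len(s)

print(accepts("aab"))   # True  ("ab" occurs)
print(accepts("bba"))   # False (no "ab")
```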

2.2 Finite Automata

Figure 1 shows the finite automaton for L1. An automaton can be easily converted to functions by converting states to functions and transitions to executable statements.

[Figure 1: F.A. for L1. Start state S loops on b and goes to P on a; P loops on a and goes to Q on b; accepting state Q loops on a and b.]

Application: Acceptor / Verifier Code:

acceptor(input)
{
    result = S(ch);
    if ( result == true ) print("Accepted");
    else print("Rejected");
}

Boolean S(ch)
{
    if ( ch == 'b' ) { next(); return( S(ch) ); }
    if ( ch == 'a' ) { next(); return( P(ch) ); }
    else return( false );
}

Boolean P(ch)
{
    if ( ch == 'a' ) { next(); return( P(ch) ); }
    if ( ch == 'b' ) { next(); return( Q(ch) ); }
    else return( false );
}

Boolean Q(ch)
{
    if ( endinput ) return( true );
    if ( ch == 'a' ) { next(); return( Q(ch) ); }
    if ( ch == 'b' ) { next(); return( Q(ch) ); }
    else return( false );
}
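As an alternative sketch (not from the notes), the same F.A. can be driven by a transition table instead of one function per state. The states and transitions below are read off the code above; the dictionary layout is my own.

```python
# Table-driven version of the F.A. of Figure 1: states become dictionary
# keys and transitions become lookups instead of function calls.

DELTA = {
    ('S', 'b'): 'S', ('S', 'a'): 'P',
    ('P', 'a'): 'P', ('P', 'b'): 'Q',
    ('Q', 'a'): 'Q', ('Q', 'b'): 'Q',
}
ACCEPTING = {'Q'}

def accepts(s: str) -> bool:
    state = 'S'                          # start state
    for ch in s:
        nxt = DELTA.get((state, ch))
        if nxt is None:                  # symbol outside {a,b}: reject
            return False
        state = nxt
    return state in ACCEPTING            # accept iff we end in Q

print(accepts("aab"))   # True
print(accepts("ba"))    # False
```

The table-driven form trades the call stack for a loop, which avoids deep recursion on long inputs.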

2.3 Moore Machine
This section shows the working of a Moore machine as a translator. Let L2 ⇒ Language of strings over Σ = {a,c} and L3 ⇒ Language of strings over Σ = {u,k}. Translate L2 to L3 by replacing every occurrence of 'c' in L2 by 'k' in L3 and 'a' in L2 by 'u' in L3.

Figure 2 shows the Moore machine as translator:

[Figure 2: Moore translator machine. Start state S/ (no output); S goes to P/u on a and to Q/k on c; P loops on a and goes to Q on c; Q loops on c and goes to P on a.]

Application: Translator Code:

translator(input)
{
    result = S(ch);
    if ( result == true ) print("Translated");
    else print("Error in input");
}

Boolean S(ch)
{
    if ( endinput )
    {
        print("Null String"); // Null string: nothing to translate
        return( true );
    }
    if ( ch == 'a' ) { next(); return( P(ch) ); }
    if ( ch == 'c' ) { next(); return( Q(ch) ); }
    else return( false );
}

Boolean P(ch)
{
    print('u');
    if ( endinput ) return( true );
    if ( ch == 'a' ) { next(); return( P(ch) ); }
    if ( ch == 'c' ) { next(); return( Q(ch) ); }
    else return( false );
}

Boolean Q(ch)
{
    print('k');
    if ( endinput ) return( true );
    if ( ch == 'a' ) { next(); return( P(ch) ); }
    if ( ch == 'c' ) { next(); return( Q(ch) ); }
    else return( false );
}
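A runnable Python sketch of the same Moore translator, assuming one dictionary for the transition function and one for the per-state outputs (both layouts and all names are illustrative, not from the notes):

```python
# Moore-style translator: the output is attached to the state entered
# (P emits 'u', Q emits 'k'), mirroring Figure 2.

OUTPUT = {'P': 'u', 'Q': 'k'}            # state -> output symbol
DELTA = {('S', 'a'): 'P', ('S', 'c'): 'Q',
         ('P', 'a'): 'P', ('P', 'c'): 'Q',
         ('Q', 'a'): 'P', ('Q', 'c'): 'Q'}

def translate(s: str) -> str:
    """Translate L2 over {a,c} to L3 over {u,k}; raise on bad input."""
    state, out = 'S', []
    for ch in s:
        if (state, ch) not in DELTA:
            raise ValueError("Error in input")
        state = DELTA[(state, ch)]
        out.append(OUTPUT[state])        # Moore: emit the new state's output
    return ''.join(out)

print(translate("acca"))   # "ukku"
```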
2.4 Mealy Machine
This section (figure 3) shows the working of a Mealy machine as a binary adder. It adds A + B.
[Figure 3: Mealy binary adder machine. Start state NC (no carry): on 0+0 output 0, on 0+1 and 1+0 output 1, staying in NC; on 1+1 output 0 and move to C (carry). In state C: on 1+1 output 1, on 0+1 and 1+0 output 0, staying in C; on 0+0 output 1 and return to NC.]


Note: "inputA" and "inputB" are strings representing binary numbers A and B. 'a' and 'b' represent bits of "inputA" and "inputB" respectively. Function next() moves the current bit position of both numbers to the next bit position. "endinputs" marks the end of the inputs. The result is printed from LSB to MSB.

Application: Adder Code:


adder(inputA, inputB)
{
    result = NC(a,b);
    if ( result == true ) print("Added");
    else print("Error in input");
}

Boolean NC(a,b)
{
    if ( endinputs ) return( true );
    if ( ( a == '0' ) AND ( b == '0' ) )
    { print(0); next(); return( NC(a,b) ); }
    if ( ( a == '0' ) AND ( b == '1' ) )
    { print(1); next(); return( NC(a,b) ); }
    if ( ( a == '1' ) AND ( b == '0' ) )
    { print(1); next(); return( NC(a,b) ); }
    if ( ( a == '1' ) AND ( b == '1' ) )
    { print(0); next(); return( C(a,b) ); }
    else return( false );
}

Boolean C(a,b)
{
    if ( endinputs )
    {
        print(1); // printing last carry
        return( true );
    }
    if ( ( a == '0' ) AND ( b == '0' ) )
    { print(1); next(); return( NC(a,b) ); }
    if ( ( a == '0' ) AND ( b == '1' ) )
    { print(0); next(); return( C(a,b) ); }
    if ( ( a == '1' ) AND ( b == '0' ) )
    { print(0); next(); return( C(a,b) ); }
    if ( ( a == '1' ) AND ( b == '1' ) )
    { print(1); next(); return( C(a,b) ); }
    else return( false );
}
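The two-state adder can be sketched in runnable Python. This version (names illustrative, not from the notes) keeps the NC/C carry states but computes each transition's output and next state arithmetically, which is an equivalent compact reading of Figure 3:

```python
# Mealy-style ripple adder over bit strings, following Figure 3:
# states NC (no carry) and C (carry); output is attached to transitions.
# Inputs are consumed LSB first, like the pseudocode's next().

def add_bits(a_bits: str, b_bits: str) -> str:
    """a_bits, b_bits: equal-length binary strings, LSB first.
    Returns the sum, LSB first."""
    state, out = 'NC', []
    for a, b in zip(a_bits, b_bits):          # assume equal length, as in the notes
        ones = (a == '1') + (b == '1') + (state == 'C')
        out.append('1' if ones % 2 else '0')  # transition output
        state = 'C' if ones >= 2 else 'NC'    # next state
    if state == 'C':
        out.append('1')                       # print last carry, as in C()
    return ''.join(out)

# 6 + 3 = 9: LSB-first 6 = "011", 3 = "110"; 9 = 1001 -> LSB-first "1001"
print(add_bits("011", "110"))   # "1001"
```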

3 Context Free Language

Two examples of context free languages are presented in this section. Section 3.1 shows conversion of a context free grammar to an accepter / verifier program. Section 3.2 shows conversion of a pushdown automaton to an accepter / verifier program.

3.1 Context Free Grammar

L4 ⇒ Language of strings over Σ = {a,b} such that every string is of the form a^n b^n, for n > 0.

Rules: S → ab / aSb

Note: "input" is the input string with 'chl' as the current left character, beginning at the first position, and 'chr' as the current right character, beginning at the last position. Function next() moves the current left character position one place to the right and function prev() moves the current right character position one place to the left. A stack is not used in this code.
Application: Acceptor / Verifier Code:

acceptor(input)
{
    result = S(chl,chr);
    if ( result == true ) print("Accepted");
    else print("Rejected");
}

Boolean S(chl,chr)
{
    if ( (chl == 'a') AND ( chr == 'b' ) )
    {
        if ( pos(chl) == pos(chr) - 1 ) return( true ); // adjacent positions: center of string
        next(); prev();
        return( S(chl,chr) );
    }
    else return( false );
}
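A Python sketch of the same two-pointer check, assuming indices l and r in place of chl/chr (names illustrative, not from the notes); as in the pseudocode, no stack is used:

```python
# Two-pointer check for L4 = { a^n b^n | n > 0 }, mirroring the
# chl/chr, next()/prev() scheme above.

def accepts(s: str) -> bool:
    l, r = 0, len(s) - 1
    while l < r:
        if s[l] != 'a' or s[r] != 'b':
            return False
        if l == r - 1:                   # adjacent: center of the string
            return True
        l, r = l + 1, r - 1              # next(); prev();
    return False                         # empty or odd-length string

print(accepts("aabb"))   # True
print(accepts("aab"))    # False
```

Note this trick needs random access to both ends of the input, which is why it works as a program even though a finite automaton cannot recognize L4.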
Current state   Input      Top   →   Next state   Stack   Remarks
S0              a          #     →   S1           a#      push(a)
S1              b          aX    →   S2           X       pop()
S1              a          a     →   S1           aa      push(a)
S2              b          aX    →   S2           X       pop()
S2              endinput   #     →   Accept       #

Table 1: Pushdown Automata for L4

3.2 Pushdown Automata

Table 1 shows a Pushdown Automaton for L4 ⇒ { a^n b^n | n > 0 } over Σ = {a,b}. Function initialize-stack() initializes the stack with # as the top of the empty stack. Supporting functions are push(character) and pop(). X is a general stack symbol.

Application: Acceptor / Verifier Code:

acceptor(input)
{
    initialize-stack();
    result = S0(ch);
    if ( result == true ) print("Accepted");
    else print("Rejected");
}

Boolean S0(ch)
{
    if ( (ch == 'a') AND ( top == '#' ) ) { push(ch); next(); return( S1(ch) ); }
    else return( false );
}

Boolean S1(ch)
{
    if ( (ch == 'b') AND ( top == 'a' ) ) { pop(); next(); return( S2(ch) ); }
    if ( (ch == 'a') AND ( top == 'a' ) ) { push(ch); next(); return( S1(ch) ); }
    else return( false );
}

Boolean S2(ch)
{
    if ( (ch == 'b') AND ( top == 'a' ) ) { pop(); next(); return( S2(ch) ); }
    if ( ( endinput ) AND ( top == '#' ) ) return( true );
    else return( false );
}
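The PDA of Table 1 can be simulated directly in Python with a list as the stack (a sketch with illustrative names; '#' plays the role of the bottom-of-stack marker set by initialize-stack()):

```python
# Explicit-stack simulation of the PDA of Table 1 for L4 = { a^n b^n | n > 0 }.

def accepts(s: str) -> bool:
    stack = ['#']                                  # initialize-stack()
    state = 'S0'
    for ch in s:
        if state == 'S0' and ch == 'a' and stack[-1] == '#':
            stack.append('a'); state = 'S1'        # push(a)
        elif state == 'S1' and ch == 'a' and stack[-1] == 'a':
            stack.append('a')                      # push(a), stay in S1
        elif state == 'S1' and ch == 'b' and stack[-1] == 'a':
            stack.pop(); state = 'S2'              # pop()
        elif state == 'S2' and ch == 'b' and stack[-1] == 'a':
            stack.pop()                            # pop(), stay in S2
        else:
            return False                           # no matching transition
    return state == 'S2' and stack[-1] == '#'      # endinput with stack empty

print(accepts("aaabbb"))  # True
print(accepts("aabbb"))   # False
```

Each 'a' is pushed and each 'b' pops one 'a', so the stack depth is exactly the count the finite control cannot hold.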

          Σ →   $              +              1
state ↓
S0              S1, Right, $   –              –
S1              –              S2, Right, 1   S1, Right, 1
S2              S3, Left, –    –              S2, Right, 1
S3              –              –              Accept, $

Table 2: Single-tape TM for unary adder
4 Turing Machine

Three examples of Turing Machines are presented in this section. Section 4.1 shows conversion of a single-tape unary adder Turing machine to a program. Section 4.2 shows conversion of a multi-tape Turing machine for binary AND to a program. Section 4.3 shows conversion of a multi-track Turing machine for binary OR to a program.

4.1 Single Tape

Table 2 shows a single-tape Turing machine which adds 2 unary numbers and stores the result in C, i.e. C = A + B. The unary alphabet is Σ = { 1 }; the null string represents the number zero. Delimiters and the addition symbol make the tape alphabet { 1 , + , $ }. The transition function results in a 3-tuple: { next state, movement of the tape head (Left / Right), result symbol }. Functions left() and right() change the current head position to its left and right neighbour respectively. $ marks the ends of the input. The initial contents of the tape are $A+B$ and the result is $C$. Begin at the leftmost $ in start state S0. For example, input 11+111 will result in output 11111: the '+' sign is converted to 1, the last 1 to $, and the last $ to blank.

Application: Unary Adder Code:



adder(Tape)
{
    head = position( left most $ );
    result = S0(head);
    if ( result == true ) print("Added, Result C is on tape");
    else print("Error in input");
}

Boolean S0(head)
{
    if ( head == '$' ) { right(); return( S1(head) ); }
    else return( false );
}

Boolean S1(head)
{
    if ( head == '+' ) { write(1); return( S2(head) ); }
    if ( head == 1 ) { right(); return( S1(head) ); }
    else return( false );
}

Boolean S2(head)
{
    if ( head == '$' ) { write(blank); left(); return( S3(head) ); }
    if ( head == 1 ) { right(); return( S2(head) ); }
    else return( false );
}

Boolean S3(head)
{
    if ( head == 1 ) { write('$'); return( true ); }
    else return( false );
}
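A Python sketch that simulates the machine of Table 2 step by step, holding the tape as a list (names illustrative, not from the notes; the blank is written as a space):

```python
# Direct simulation of the single-tape unary adder of Table 2.
# Tape starts as $A+B$ in unary; the machine rewrites it in place to $C$.

def unary_add(tape: str) -> str:
    t = list(tape)
    i = 0                                # begin at the leftmost $
    state = 'S0'
    while True:
        if state == 'S0':
            if t[i] != '$': raise ValueError("Error in input")
            i += 1; state = 'S1'                         # move Right
        elif state == 'S1':
            if t[i] == '1': i += 1                       # scan A rightwards
            elif t[i] == '+': t[i] = '1'; state = 'S2'   # '+' becomes 1
            else: raise ValueError("Error in input")
        elif state == 'S2':
            if t[i] == '1': i += 1                       # scan B rightwards
            elif t[i] == '$':
                t[i] = ' '; i -= 1; state = 'S3'         # last $ -> blank, move Left
            else: raise ValueError("Error in input")
        else:  # S3
            if t[i] != '1': raise ValueError("Error in input")
            t[i] = '$'                                   # last 1 -> $, accept
            return ''.join(t).rstrip()

print(unary_add("$11+111$"))   # "$11111$"
```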
4.2 Multi Tape

Table 3 shows a multi-tape Turing machine which performs bitwise AND on 2 numbers A and B and stores the result on tape C, i.e. C = A AND B. One tape stores one data item. 'a', 'b' and 'c' represent bits in A, B and C respectively. The transition function results in a 3-tuple: { next state, movement of each tape head (Left / Right), result bit }. Functions left(Tape) and right(Tape) change the current head position of a tape to its left or right neighbour respectively. Function write( head, symbol ) writes the given symbol at the current position of the tape head. $ marks the ends of data on a tape. The result is generated from LSB to MSB. A suitable data structure for a tape would be a list (array or linked list).
This AND operation is similar to manual AND:

        1 0 1 0   → A
AND     1 0 0 1   → B
---------------------
        1 0 0 0   → C
a →       0            0            1            1            $
b →       0            1            0            1            $
state ↓
S         S, Left, 0   S, Left, 0   S, Left, 0   S, Left, 1   Accept

Table 3: Multi-tape TM for bitwise AND (each cell gives next state, head movement, and the bit written to c)

Application: Code for Bitwise AND:
f_and(Tape A, Tape B, Tape C)
{
    heada = position( LSB of A ); headb = position( LSB of B );
    headc = position( LSB of C );
    result = S( heada, headb, headc );
    if ( result == true ) print("AND performed, Result is on tape C");
    else print("Error in input");
}

Boolean S( heada, headb, headc )
{
    if ( ( heada == 0 ) AND ( headb == 0 ) )
    {
        write( headc, 0 ); left(C);
        left(A); left(B); return( S( heada, headb, headc ) );
    }
    if ( ( heada == 0 ) AND ( headb == 1 ) )
    {
        write( headc, 0 ); left(C);
        left(A); left(B); return( S( heada, headb, headc ) );
    }
    if ( ( heada == 1 ) AND ( headb == 0 ) )
    {
        write( headc, 0 ); left(C);
        left(A); left(B); return( S( heada, headb, headc ) );
    }
    if ( ( heada == 1 ) AND ( headb == 1 ) )
    {
        write( headc, 1 ); left(C);
        left(A); left(B); return( S( heada, headb, headc ) );
    }
    if ( ( heada == '$' ) AND ( headb == '$' ) ) return( true );
    else return( false );
}
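A Python sketch of the multi-tape AND, holding each tape as a list with one head index and moving all heads left together; the '$' end markers are replaced by an index bound check (names illustrative, not from the notes):

```python
# Multi-tape bitwise AND in the spirit of Table 3: one list per tape,
# heads start at the LSB and all move Left (towards the MSB) in lockstep.

def bitwise_and(a: str, b: str) -> str:
    """a, b: equal-length bit strings, MSB first (as written on the tapes)."""
    ta, tb = list(a), list(b)
    tc = [' '] * len(a)                  # result tape C, initially blank
    i = len(a) - 1                       # heads start at the LSB
    while i >= 0:                        # bound check stands in for '$'
        tc[i] = '1' if (ta[i] == '1' and tb[i] == '1') else '0'
        i -= 1                           # left(A); left(B); left(C)
    return ''.join(tc)

print(bitwise_and("1010", "1001"))   # "1000"
```

The multi-track OR of section 4.3 differs only in the bit written per cell, so the same skeleton applies.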

a →       0            0            1            1            $
b →       0            1            0            1            $
state ↓
S         S, Left, 0   S, Left, 1   S, Left, 1   S, Left, 1   Accept

Table 4: Multi-track TM for bitwise OR
4.3 Multi Track

Table 4 shows a multi-track Turing machine which performs bitwise OR on 2 numbers A and B and stores the result on track C, i.e. C = A OR B. One track stores one data item. 'a', 'b' and 'c' represent bits in A, B and C respectively. The transition function results in a 3-tuple: { next state, movement of the head (Left / Right), result bit }. Functions left() and right() move the head (all tracks together) to its left or right neighbour respectively. Function write( head, symbol ) writes the given symbol at the current position of the head. $ marks the ends of data on the tracks. The result is generated from LSB to MSB. A suitable data structure for the tracks would be a list of structures containing one data item per track (array or linked list).

This OR operation is similar to manual OR:

        1 0 1 0   → A
OR      1 0 0 1   → B
---------------------
        1 0 1 1   → C
Application: Code for Bitwise OR:
f_or(Tracks A, B and C)
{
    head = position( LSB );
    result = S( head );
    if ( result == true ) print("OR performed, Result is on track C");
    else print("Error in input");
}

Boolean S( head )
{
    if ( ( head.a == 0 ) AND ( head.b == 0 ) )
    {
        write( head.c, 0 );
        left(); return( S( head ) );
    }
    if ( ( head.a == 0 ) AND ( head.b == 1 ) )
    {
        write( head.c, 1 );
        left(); return( S( head ) );
    }
    if ( ( head.a == 1 ) AND ( head.b == 0 ) )
    {
        write( head.c, 1 );
        left(); return( S( head ) );
    }
    if ( ( head.a == 1 ) AND ( head.b == 1 ) )
    {
        write( head.c, 1 );
        left(); return( S( head ) );
    }
    if ( ( head.a == '$' ) AND ( head.b == '$' ) ) return( true );
    else return( false );
}

Extra Material for reference
Power and limitations of Finite State Machine

A deterministic finite automaton (DFA), also known as a deterministic finite acceptor, a deterministic finite state machine (DFSM) or a deterministic finite state automaton (DFSA), is a finite-state machine that accepts or rejects strings of symbols and produces a unique computation (or run) of the automaton for each input string.[1] "Deterministic" refers to the uniqueness of the computation.

A DFA is defined as an abstract mathematical concept, but is often implemented in hardware and
software for solving various specific problems. For example, a DFA can model software that decides
whether or not online user input such as email addresses is valid.[4]
DFAs recognize exactly the set of regular languages,[1] which are, among other things, useful for
doing lexical analysis and pattern matching. DFAs can be built from nondeterministic finite
automata (NFAs) using the powerset construction method.

DFAs are equivalent in computing power to nondeterministic finite automata (NFAs).

On the other hand, finite state automata are of strictly limited power in the languages they can
recognize; many simple languages, including any problem that requires more than constant space to
solve, cannot be recognized by a DFA.

Power and limitations of DFA and Turing Machine

The major distinction between how DFAs (Deterministic Finite Automaton) and TMs work is
in terms of how they use memory.

Intuitively, DFAs have no "scratch" memory at all; the configuration of a DFA is entirely
accounted for by the state in which it currently finds itself, and its current progress in
reading the input.

Intuitively, TMs have a "scratch" memory in the form of tape; the configuration of a TM
consists both of its current state and the current contents of the tape, which the TM may
change as it executes.

A DFA may be thought of as a TM that neither changes any tape symbols nor moves the
head to the left. These restrictions make it impossible to recognize certain languages which
can be accepted by TMs.

Note that I use the term "DFA" rather than "FSM", since, technically, I'd consider a TM to be
a finite-state machine, since TMs by definition have a finite number of states. The difference
between DFAs and TMs is in the number of configurations, which is the same as the
number of states for a DFA, but is infinitely great for a TM.
Turing Machines describe a much larger class of languages, the class of recursively
enumerable languages. Finite state machines describe the class of regular languages.

Finite state machines have no "memory", it is limited by its states.

A finite-state machine is a restricted Turing machine where the head can only perform
"read" operations, and always moves from left to right.

Take this language as an example:

L = { a^i b^i | i >= 0 }

Because finite state machines are limited in the sense that they have no memory, an FSM that accepts L can't be constructed.

To summarize:

Finite state machines describe a small class of languages where no memory is needed.

Turing Machines are the mathematical description of a computer and accept a much larger
class of languages than FSMs do.

Turing Machines have more computational power than FSMs. There are tasks which no FSM can do, but which Turing Machines can do.

FA vs. PDA vs. TMs


FA: Bounded input and finite set of states;
Can only read and move to the right;
After reading the word it decides whether to accept or not.

PDA: Bounded input and finite set of states;


Can only read and move to the right;
Stack with unbounded memory and LIFO access (last in-first out);
After reading the word it decides whether to accept or not.

TM: Unbounded input and finite set of states;


Random access in unbounded memory;
Can read and write, and move left and right;
If the TM is in a final state when there are no more
movements, then the input is accepted.
Turing Machine

Power of Turing Machine


The Turing machine has great computational capabilities, so it can be used as a general mathematical model for modern computers. A Turing machine can model even recursively enumerable languages. Thus the advantage of the Turing machine is that it can model all the computable functions as well as all the languages for which an algorithm is possible.
Limitations of Turing Machine
1. Computational complexity theory
a. A limitation of Turing machines is that they do not model the strengths of a particular
arrangement well.
b. For instance, modern stored-program computers are actually instances of a more specific form of abstract machine known as the random-access stored-program machine, or RASP machine model.
c. Like the universal Turing machine, the RASP stores its "program" in "memory" external to its finite-state machine's "instructions".
d. The RASP's finite-state machine is equipped with the capability for indirect
addressing thus the RASP's "program" can address any register in the register-
sequence. The upshot of this distinction is that there are computational optimizations
that can be performed based on the memory indices, which are not possible in a
general Turing machine; thus when Turing machines are used as the basis for bounding
running times, a 'false lower bound' can be proven on certain algorithms' running times.
An example of this is binary search, an algorithm that can be shown to perform more
quickly when using the RASP model of computation rather than the Turing machine
model.
2. Concurrency
a. Another limitation of Turing machines is that they do not model concurrency well.
b. For example, there is a bound on the size of integer that can be computed by an
always halting nondeterministic Turing machine starting on a blank tape.
c. By contrast, there are always-halting concurrent systems with no inputs that can
compute an integer of unbounded size.

Comparison with real machines

[Image: A Turing machine realisation in Lego]


It is often said that Turing machines, unlike simpler automata, are as powerful as real machines,
and are able to execute any operation that a real program can. What is neglected in this statement is
that, because a real machine can only have a finite number of configurations, this "real machine" is
really nothing but a linear bounded automaton. On the other hand, Turing machines are equivalent
to machines that have an unlimited amount of storage space for their computations. However, Turing
machines are not intended to model computers, but rather they are intended to model computation
itself. Historically, computers, which compute only on their (fixed) internal storage, were developed
only later.
There are a number of ways to explain why Turing machines are useful models of real
computers:

1. Anything a real computer can compute, a Turing machine can also compute. For example:
"A Turing machine can simulate any type of subroutine found in programming languages,
including recursive procedures and any of the known parameter-passing mechanisms"
(Hopcroft and Ullman p. 157). A large enough FSA can also model any real computer,
disregarding IO. Thus, a statement about the limitations of Turing machines will also apply to
real computers.
2. The difference lies only with the ability of a Turing machine to manipulate an unbounded
amount of data. However, given a finite amount of time, a Turing machine (like a real
machine) can only manipulate a finite amount of data.
3. Like a Turing machine, a real machine can have its storage space enlarged as needed, by
acquiring more disks or other storage media. If the supply of these runs short, the Turing
machine may become less useful as a model. But the fact is that neither Turing machines
nor real machines need astronomical amounts of storage space in order to perform useful
computation. The processing time required is usually much more of a problem.
4. Descriptions of real machine programs using simpler abstract models are often much more
complex than descriptions using Turing machines. For example, a Turing machine
describing an algorithm may have a few hundred states, while the equivalent deterministic
finite automaton (DFA) on a given real machine has quadrillions. This makes the DFA
representation infeasible to analyze.
5. Turing machines describe algorithms independent of how much memory they use. There is a
limit to the memory possessed by any current machine, but this limit can rise arbitrarily in
time. Turing machines allow us to make statements about algorithms which will
(theoretically) hold forever, regardless of advances in conventional computing machine
architecture.
6. Turing machines simplify the statement of algorithms. Algorithms running on Turing-
equivalent abstract machines are usually more general than their counterparts running on
real machines, because they have arbitrary-precision data types available and never have to
deal with unexpected conditions (including, but not limited to, running out of memory).

[Image: An experimental prototype of a Turing machine]

Limitations of Turing machines

Computational complexity theory
Further information: Computational complexity theory
A limitation of Turing machines is that they do not model the strengths of a particular arrangement
well. For instance, modern stored-program computers are actually instances of a more specific form
of abstract machine known as the random-access stored-program machine or RASP machine
model. Like the universal Turing machine the RASP stores its "program" in "memory" external to its
finite-state machine's "instructions". Unlike the universal Turing machine, the RASP has an infinite
number of distinguishable, numbered but unbounded "registers"—memory "cells" that can contain
any integer (cf. Elgot and Robinson (1964), Hartmanis (1971), and in particular Cook-Rechow
(1973); references at random access machine). The RASP's finite-state machine is equipped with
the capability for indirect addressing (e.g. the contents of one register can be used as an address to
specify another register); thus the RASP's "program" can address any register in the register-
sequence. The upshot of this distinction is that there are computational optimizations that can be
performed based on the memory indices, which are not possible in a general Turing machine; thus
when Turing machines are used as the basis for bounding running times, a 'false lower bound' can
be proven on certain algorithms' running times (due to the false simplifying assumption of a Turing
machine). An example of this is binary search, an algorithm that can be shown to perform more
quickly when using the RASP model of computation rather than the Turing machine model.
Concurrency
Another limitation of Turing machines is that they do not model concurrency well. For example, there
is a bound on the size of integer that can be computed by an always-halting nondeterministic Turing
machine starting on a blank tape. (See article on unbounded nondeterminism.) By contrast, there
are always-halting concurrent systems with no inputs that can compute an integer of unbounded
size. (A process can be created with local storage that is initialized with a count of 0 that
concurrently sends itself both a stop and a go message. When it receives a go message, it
increments its count by 1 and sends itself a go message. When it receives a stop message, it stops
with an unbounded number in its local storage.)
Interaction
In the early days of computing, computer use was typically limited to batch processing, i.e., non-
interactive tasks, each producing output data from given input data. Computability theory, which
studies computability of functions from inputs to outputs, and for which Turing machines were
invented, reflects this practice.
Since the 1970s, interactive use of computers became much more common. In principle, it is
possible to model this by having an external agent read from the tape and write to it at the same time
as a Turing machine, but this rarely matches how interaction actually happens; therefore, when
describing interactivity, alternatives such as I/O automata are usually preferred.
